00:00:00.001 Started by upstream project "autotest-per-patch" build number 132365 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.119 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.179 Fetching changes from the remote Git repository 00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.229 Using shallow fetch with depth 1 00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.229 > git --version # timeout=10 00:00:00.264 > git --version # 'git version 2.39.2' 00:00:00.264 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.286 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.286 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.778 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.791 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.802 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.802 > git config core.sparsecheckout # timeout=10 00:00:06.815 > git read-tree -mu HEAD # timeout=10 00:00:06.833 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.855 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.855 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.952 [Pipeline] Start of Pipeline 00:00:06.963 [Pipeline] library 00:00:06.964 Loading library shm_lib@master 00:00:06.964 Library shm_lib@master is cached. Copying from home. 00:00:06.977 [Pipeline] node 00:00:06.985 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:06.986 [Pipeline] { 00:00:06.994 [Pipeline] catchError 00:00:06.995 [Pipeline] { 00:00:07.008 [Pipeline] wrap 00:00:07.016 [Pipeline] { 00:00:07.025 [Pipeline] stage 00:00:07.026 [Pipeline] { (Prologue) 00:00:07.043 [Pipeline] echo 00:00:07.045 Node: VM-host-SM17 00:00:07.051 [Pipeline] cleanWs 00:00:07.060 [WS-CLEANUP] Deleting project workspace... 00:00:07.060 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.066 [WS-CLEANUP] done 00:00:07.260 [Pipeline] setCustomBuildProperty 00:00:07.342 [Pipeline] httpRequest 00:00:07.694 [Pipeline] echo 00:00:07.696 Sorcerer 10.211.164.20 is alive 00:00:07.705 [Pipeline] retry 00:00:07.707 [Pipeline] { 00:00:07.720 [Pipeline] httpRequest 00:00:07.724 HttpMethod: GET 00:00:07.725 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.726 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.744 Response Code: HTTP/1.1 200 OK 00:00:07.745 Success: Status code 200 is in the accepted range: 200,404 00:00:07.745 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.733 [Pipeline] } 00:00:09.746 [Pipeline] // retry 00:00:09.754 [Pipeline] sh 00:00:10.033 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.046 [Pipeline] httpRequest 00:00:10.362 [Pipeline] echo 00:00:10.363 Sorcerer 10.211.164.20 is alive 00:00:10.373 [Pipeline] retry 00:00:10.375 [Pipeline] { 00:00:10.387 [Pipeline] httpRequest 00:00:10.392 HttpMethod: GET 00:00:10.392 URL: http://10.211.164.20/packages/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:00:10.393 Sending request to url: http://10.211.164.20/packages/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:00:10.407 Response Code: HTTP/1.1 200 OK 00:00:10.407 Success: Status code 200 is in the accepted range: 200,404 00:00:10.408 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:02:27.662 [Pipeline] } 00:02:27.678 [Pipeline] // retry 00:02:27.685 [Pipeline] sh 00:02:27.967 + tar --no-same-owner -xf spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:02:31.264 [Pipeline] sh 00:02:31.583 + git -C spdk log --oneline -n5 00:02:31.583 6fc96a60f test/nvmf: Prepare replacements for the network setup 00:02:31.583 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:02:31.583 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:02:31.583 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:02:31.583 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
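The checkout above pins the job-scripts repo (jbp) to db4637e8 and unpacks the SPDK source under test at 6fc96a60f from the internal package cache. As a rough sketch only (not the pipeline code itself), the jbp checkout could be reproduced by hand, assuming Gerrit credentials and proxy access are available; the working directory here is arbitrary, and the SPDK tarball itself is only served by the 10.211.164.20 cache:

    # Shallow-fetch the job-scripts repo at the revision Jenkins checked out
    git init jbp && cd jbp
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507
    # Assuming the spdk_*.tar.gz from the cache was extracted alongside, inspect it the same way
    git -C ../spdk log --oneline -n5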
00:02:31.616 [Pipeline] writeFile 00:02:31.631 [Pipeline] sh 00:02:31.911 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:31.923 [Pipeline] sh 00:02:32.204 + cat autorun-spdk.conf 00:02:32.204 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:32.204 SPDK_TEST_NVMF=1 00:02:32.204 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:32.204 SPDK_TEST_URING=1 00:02:32.204 SPDK_TEST_USDT=1 00:02:32.204 SPDK_RUN_UBSAN=1 00:02:32.204 NET_TYPE=virt 00:02:32.204 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.211 RUN_NIGHTLY=0 00:02:32.214 [Pipeline] } 00:02:32.227 [Pipeline] // stage 00:02:32.243 [Pipeline] stage 00:02:32.245 [Pipeline] { (Run VM) 00:02:32.257 [Pipeline] sh 00:02:32.539 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:32.539 + echo 'Start stage prepare_nvme.sh' 00:02:32.539 Start stage prepare_nvme.sh 00:02:32.539 + [[ -n 4 ]] 00:02:32.539 + disk_prefix=ex4 00:02:32.539 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 ]] 00:02:32.539 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf ]] 00:02:32.539 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf 00:02:32.539 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:32.539 ++ SPDK_TEST_NVMF=1 00:02:32.539 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:32.539 ++ SPDK_TEST_URING=1 00:02:32.539 ++ SPDK_TEST_USDT=1 00:02:32.539 ++ SPDK_RUN_UBSAN=1 00:02:32.539 ++ NET_TYPE=virt 00:02:32.539 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.539 ++ RUN_NIGHTLY=0 00:02:32.539 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:02:32.539 + nvme_files=() 00:02:32.539 + declare -A nvme_files 00:02:32.539 + backend_dir=/var/lib/libvirt/images/backends 00:02:32.539 + nvme_files['nvme.img']=5G 00:02:32.539 + nvme_files['nvme-cmb.img']=5G 00:02:32.539 + nvme_files['nvme-multi0.img']=4G 00:02:32.539 + nvme_files['nvme-multi1.img']=4G 00:02:32.539 + nvme_files['nvme-multi2.img']=4G 00:02:32.539 + nvme_files['nvme-openstack.img']=8G 00:02:32.539 + nvme_files['nvme-zns.img']=5G 00:02:32.539 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:32.539 + (( SPDK_TEST_FTL == 1 )) 00:02:32.539 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:32.539 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:32.539 + for nvme in "${!nvme_files[@]}" 00:02:32.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:02:32.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:32.539 + for nvme in "${!nvme_files[@]}" 00:02:32.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:02:32.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:32.539 + for nvme in "${!nvme_files[@]}" 00:02:32.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:02:32.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:32.539 + for nvme in "${!nvme_files[@]}" 00:02:32.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:02:32.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:32.539 + for nvme in "${!nvme_files[@]}" 00:02:32.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:02:32.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:32.539 + for nvme in "${!nvme_files[@]}" 00:02:32.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:02:32.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:32.539 + for nvme in "${!nvme_files[@]}" 00:02:32.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:02:33.488 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:33.488 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:02:33.488 + echo 'End stage prepare_nvme.sh' 00:02:33.488 End stage prepare_nvme.sh 00:02:33.508 [Pipeline] sh 00:02:33.789 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:33.789 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:02:33.789 00:02:33.789 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant 00:02:33.789 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk 00:02:33.789 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:02:33.789 HELP=0 00:02:33.789 DRY_RUN=0 00:02:33.789 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:02:33.789 NVME_DISKS_TYPE=nvme,nvme, 00:02:33.789 NVME_AUTO_CREATE=0 00:02:33.789 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:02:33.789 NVME_CMB=,, 00:02:33.789 NVME_PMR=,, 00:02:33.789 NVME_ZNS=,, 00:02:33.789 NVME_MS=,, 00:02:33.789 NVME_FDP=,, 
00:02:33.789 SPDK_VAGRANT_DISTRO=fedora39 00:02:33.789 SPDK_VAGRANT_VMCPU=10 00:02:33.789 SPDK_VAGRANT_VMRAM=12288 00:02:33.789 SPDK_VAGRANT_PROVIDER=libvirt 00:02:33.789 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:33.789 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:33.789 SPDK_OPENSTACK_NETWORK=0 00:02:33.789 VAGRANT_PACKAGE_BOX=0 00:02:33.789 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:02:33.789 FORCE_DISTRO=true 00:02:33.789 VAGRANT_BOX_VERSION= 00:02:33.789 EXTRA_VAGRANTFILES= 00:02:33.789 NIC_MODEL=e1000 00:02:33.789 00:02:33.789 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt' 00:02:33.789 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:02:37.075 Bringing machine 'default' up with 'libvirt' provider... 00:02:37.642 ==> default: Creating image (snapshot of base box volume). 00:02:37.642 ==> default: Creating domain with the following settings... 00:02:37.642 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732091768_f780942ef3f220398a19 00:02:37.642 ==> default: -- Domain type: kvm 00:02:37.642 ==> default: -- Cpus: 10 00:02:37.642 ==> default: -- Feature: acpi 00:02:37.642 ==> default: -- Feature: apic 00:02:37.642 ==> default: -- Feature: pae 00:02:37.642 ==> default: -- Memory: 12288M 00:02:37.642 ==> default: -- Memory Backing: hugepages: 00:02:37.642 ==> default: -- Management MAC: 00:02:37.642 ==> default: -- Loader: 00:02:37.642 ==> default: -- Nvram: 00:02:37.642 ==> default: -- Base box: spdk/fedora39 00:02:37.642 ==> default: -- Storage pool: default 00:02:37.642 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732091768_f780942ef3f220398a19.img (20G) 00:02:37.642 ==> default: -- Volume Cache: default 00:02:37.642 ==> default: -- Kernel: 00:02:37.642 ==> default: -- Initrd: 00:02:37.642 ==> default: -- Graphics Type: vnc 00:02:37.642 ==> default: -- Graphics Port: -1 00:02:37.642 ==> default: -- Graphics IP: 127.0.0.1 00:02:37.642 ==> default: -- Graphics Password: Not defined 00:02:37.642 ==> default: -- Video Type: cirrus 00:02:37.642 ==> default: -- Video VRAM: 9216 00:02:37.642 ==> default: -- Sound Type: 00:02:37.642 ==> default: -- Keymap: en-us 00:02:37.642 ==> default: -- TPM Path: 00:02:37.642 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:37.642 ==> default: -- Command line args: 00:02:37.642 ==> default: -> value=-device, 00:02:37.642 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:37.642 ==> default: -> value=-drive, 00:02:37.642 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:02:37.642 ==> default: -> value=-device, 00:02:37.642 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:37.642 ==> default: -> value=-device, 00:02:37.642 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:37.642 ==> default: -> value=-drive, 00:02:37.642 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:37.642 ==> default: -> value=-device, 00:02:37.642 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:37.642 ==> default: -> value=-drive, 00:02:37.642 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:37.642 ==> default: -> value=-device, 00:02:37.642 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:37.642 ==> default: -> value=-drive, 00:02:37.642 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:37.642 ==> default: -> value=-device, 00:02:37.642 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:37.901 ==> default: Creating shared folders metadata... 00:02:37.901 ==> default: Starting domain. 00:02:39.279 ==> default: Waiting for domain to get an IP address... 00:02:57.410 ==> default: Waiting for SSH to become available... 00:02:57.410 ==> default: Configuring and enabling network interfaces... 00:03:00.696 default: SSH address: 192.168.121.216:22 00:03:00.696 default: SSH username: vagrant 00:03:00.696 default: SSH auth method: private key 00:03:02.603 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:10.730 ==> default: Mounting SSHFS shared folder... 00:03:12.638 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:12.638 ==> default: Checking Mount.. 00:03:13.575 ==> default: Folder Successfully Mounted! 00:03:13.575 ==> default: Running provisioner: file... 00:03:14.512 default: ~/.gitconfig => .gitconfig 00:03:15.079 00:03:15.079 SUCCESS! 00:03:15.079 00:03:15.079 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:03:15.079 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:15.079 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 00:03:15.079 00:03:15.088 [Pipeline] } 00:03:15.104 [Pipeline] // stage 00:03:15.113 [Pipeline] dir 00:03:15.114 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt 00:03:15.116 [Pipeline] { 00:03:15.129 [Pipeline] catchError 00:03:15.131 [Pipeline] { 00:03:15.145 [Pipeline] sh 00:03:15.426 + vagrant ssh-config --host vagrant 00:03:15.426 + sed -ne /^Host/,$p 00:03:15.426 + tee ssh_conf 00:03:18.715 Host vagrant 00:03:18.715 HostName 192.168.121.216 00:03:18.715 User vagrant 00:03:18.715 Port 22 00:03:18.715 UserKnownHostsFile /dev/null 00:03:18.715 StrictHostKeyChecking no 00:03:18.715 PasswordAuthentication no 00:03:18.715 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:18.715 IdentitiesOnly yes 00:03:18.715 LogLevel FATAL 00:03:18.715 ForwardAgent yes 00:03:18.715 ForwardX11 yes 00:03:18.715 00:03:18.730 [Pipeline] withEnv 00:03:18.732 [Pipeline] { 00:03:18.746 [Pipeline] sh 00:03:19.028 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:19.028 source /etc/os-release 00:03:19.028 [[ -e /image.version ]] && img=$(< /image.version) 00:03:19.028 # Minimal, systemd-like check. 
00:03:19.028 if [[ -e /.dockerenv ]]; then 00:03:19.028 # Clear garbage from the node's name: 00:03:19.028 # agt-er_autotest_547-896 -> autotest_547-896 00:03:19.028 # $HOSTNAME is the actual container id 00:03:19.028 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:19.028 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:19.028 # We can assume this is a mount from a host where container is running, 00:03:19.028 # so fetch its hostname to easily identify the target swarm worker. 00:03:19.028 container="$(< /etc/hostname) ($agent)" 00:03:19.028 else 00:03:19.028 # Fallback 00:03:19.028 container=$agent 00:03:19.028 fi 00:03:19.028 fi 00:03:19.028 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:19.028 00:03:19.298 [Pipeline] } 00:03:19.316 [Pipeline] // withEnv 00:03:19.324 [Pipeline] setCustomBuildProperty 00:03:19.337 [Pipeline] stage 00:03:19.339 [Pipeline] { (Tests) 00:03:19.355 [Pipeline] sh 00:03:19.638 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:19.909 [Pipeline] sh 00:03:20.191 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:20.528 [Pipeline] timeout 00:03:20.529 Timeout set to expire in 1 hr 0 min 00:03:20.531 [Pipeline] { 00:03:20.544 [Pipeline] sh 00:03:20.824 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:21.392 HEAD is now at 6fc96a60f test/nvmf: Prepare replacements for the network setup 00:03:21.405 [Pipeline] sh 00:03:21.686 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:21.959 [Pipeline] sh 00:03:22.238 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:22.514 [Pipeline] sh 00:03:22.799 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:23.057 ++ readlink -f spdk_repo 00:03:23.057 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:23.057 + [[ -n /home/vagrant/spdk_repo ]] 00:03:23.057 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:23.057 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:23.057 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:23.057 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:23.057 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:23.057 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:23.057 + cd /home/vagrant/spdk_repo 00:03:23.057 + source /etc/os-release 00:03:23.057 ++ NAME='Fedora Linux' 00:03:23.057 ++ VERSION='39 (Cloud Edition)' 00:03:23.057 ++ ID=fedora 00:03:23.057 ++ VERSION_ID=39 00:03:23.057 ++ VERSION_CODENAME= 00:03:23.057 ++ PLATFORM_ID=platform:f39 00:03:23.057 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:23.057 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:23.057 ++ LOGO=fedora-logo-icon 00:03:23.057 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:23.057 ++ HOME_URL=https://fedoraproject.org/ 00:03:23.057 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:23.057 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:23.057 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:23.057 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:23.057 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:23.057 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:23.057 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:23.057 ++ SUPPORT_END=2024-11-12 00:03:23.057 ++ VARIANT='Cloud Edition' 00:03:23.057 ++ VARIANT_ID=cloud 00:03:23.057 + uname -a 00:03:23.057 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:23.057 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:23.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:23.624 Hugepages 00:03:23.624 node hugesize free / total 00:03:23.624 node0 1048576kB 0 / 0 00:03:23.624 node0 2048kB 0 / 0 00:03:23.624 00:03:23.624 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:23.624 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:23.625 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:23.625 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:23.625 + rm -f /tmp/spdk-ld-path 00:03:23.625 + source autorun-spdk.conf 00:03:23.625 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:23.625 ++ SPDK_TEST_NVMF=1 00:03:23.625 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:23.625 ++ SPDK_TEST_URING=1 00:03:23.625 ++ SPDK_TEST_USDT=1 00:03:23.625 ++ SPDK_RUN_UBSAN=1 00:03:23.625 ++ NET_TYPE=virt 00:03:23.625 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:23.625 ++ RUN_NIGHTLY=0 00:03:23.625 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:23.625 + [[ -n '' ]] 00:03:23.625 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:23.625 + for M in /var/spdk/build-*-manifest.txt 00:03:23.625 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:23.625 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:23.625 + for M in /var/spdk/build-*-manifest.txt 00:03:23.625 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:23.625 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:23.625 + for M in /var/spdk/build-*-manifest.txt 00:03:23.625 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:23.625 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:23.625 ++ uname 00:03:23.625 + [[ Linux == \L\i\n\u\x ]] 00:03:23.625 + sudo dmesg -T 00:03:23.625 + sudo dmesg --clear 00:03:23.625 + dmesg_pid=5210 00:03:23.625 + sudo dmesg -Tw 00:03:23.625 + [[ Fedora Linux == FreeBSD ]] 00:03:23.625 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:23.625 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:23.625 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:23.625 + [[ -x /usr/src/fio-static/fio ]] 00:03:23.625 + export FIO_BIN=/usr/src/fio-static/fio 00:03:23.625 + FIO_BIN=/usr/src/fio-static/fio 00:03:23.625 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:23.625 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:23.625 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:23.625 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:23.625 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:23.625 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:23.625 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:23.625 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:23.625 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:23.625 08:36:54 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:23.625 08:36:54 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:23.625 08:36:54 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:23.625 08:36:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:23.625 08:36:54 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:23.884 08:36:54 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:23.884 08:36:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:23.884 08:36:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:23.884 08:36:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:23.884 08:36:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:23.884 08:36:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:23.884 08:36:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.884 08:36:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.884 08:36:54 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.884 08:36:54 -- paths/export.sh@5 -- $ export PATH 00:03:23.884 08:36:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.884 08:36:54 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:23.884 08:36:54 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:23.884 08:36:54 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732091814.XXXXXX 00:03:23.884 08:36:54 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732091814.MkBmry 00:03:23.884 08:36:54 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:23.884 08:36:54 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:23.884 08:36:54 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:23.884 08:36:54 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:23.884 08:36:54 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:23.884 08:36:54 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:23.884 08:36:54 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:23.884 08:36:54 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.884 08:36:54 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:03:23.884 08:36:54 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:23.884 08:36:54 -- pm/common@17 -- $ local monitor 00:03:23.884 08:36:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.884 08:36:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.884 08:36:54 -- pm/common@25 -- $ sleep 1 00:03:23.884 08:36:54 -- pm/common@21 -- $ date +%s 00:03:23.884 08:36:54 -- pm/common@21 -- $ date +%s 00:03:23.884 08:36:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732091814 00:03:23.884 08:36:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732091814 00:03:23.884 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732091814_collect-cpu-load.pm.log 00:03:23.884 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732091814_collect-vmstat.pm.log 00:03:24.820 08:36:55 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:24.820 08:36:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:24.820 08:36:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:24.820 08:36:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:24.820 08:36:55 -- spdk/autobuild.sh@16 -- $ date -u 00:03:24.820 Wed Nov 20 08:36:55 AM UTC 2024 00:03:24.820 08:36:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:24.820 v25.01-pre-200-g6fc96a60f 00:03:24.820 08:36:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:24.820 08:36:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:24.820 08:36:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:24.820 08:36:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:24.820 08:36:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:24.820 08:36:55 -- common/autotest_common.sh@10 -- $ set +x 00:03:24.820 ************************************ 00:03:24.820 START TEST ubsan 00:03:24.820 ************************************ 00:03:24.820 using ubsan 00:03:24.820 08:36:55 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:24.820 00:03:24.820 real 0m0.000s 00:03:24.820 user 0m0.000s 00:03:24.820 sys 0m0.000s 00:03:24.820 08:36:55 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:24.820 ************************************ 00:03:24.820 END TEST ubsan 00:03:24.820 ************************************ 00:03:24.820 08:36:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:24.820 08:36:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:24.820 08:36:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:24.820 08:36:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:24.820 08:36:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:24.820 08:36:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:24.820 08:36:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:24.820 08:36:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:24.820 08:36:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:24.820 08:36:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:03:25.079 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:25.079 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:25.338 Using 'verbs' RDMA provider 00:03:41.203 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:53.413 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:53.413 Creating mk/config.mk...done. 00:03:53.413 Creating mk/cc.flags.mk...done. 00:03:53.413 Type 'make' to build. 
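The configure line above is the config_params string that autobuild assembles from autorun-spdk.conf. As a minimal sketch of the same build done by hand inside the test VM (flags copied verbatim from the log; -j10 matches the run_test invocation that follows), it would be roughly:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10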
00:03:53.413 08:37:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:53.413 08:37:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:53.413 08:37:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:53.413 08:37:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:53.413 ************************************ 00:03:53.413 START TEST make 00:03:53.413 ************************************ 00:03:53.672 08:37:24 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:53.930 make[1]: Nothing to be done for 'all'. 00:04:06.227 The Meson build system 00:04:06.227 Version: 1.5.0 00:04:06.227 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:06.227 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:06.227 Build type: native build 00:04:06.227 Program cat found: YES (/usr/bin/cat) 00:04:06.227 Project name: DPDK 00:04:06.227 Project version: 24.03.0 00:04:06.227 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:06.227 C linker for the host machine: cc ld.bfd 2.40-14 00:04:06.227 Host machine cpu family: x86_64 00:04:06.227 Host machine cpu: x86_64 00:04:06.228 Message: ## Building in Developer Mode ## 00:04:06.228 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:06.228 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:06.228 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:06.228 Program python3 found: YES (/usr/bin/python3) 00:04:06.228 Program cat found: YES (/usr/bin/cat) 00:04:06.228 Compiler for C supports arguments -march=native: YES 00:04:06.228 Checking for size of "void *" : 8 00:04:06.228 Checking for size of "void *" : 8 (cached) 00:04:06.228 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:06.228 Library m found: YES 00:04:06.228 Library numa found: YES 00:04:06.228 Has header "numaif.h" : YES 00:04:06.228 Library fdt found: NO 00:04:06.228 Library execinfo found: NO 00:04:06.228 Has header "execinfo.h" : YES 00:04:06.228 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:06.228 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:06.228 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:06.228 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:06.228 Run-time dependency openssl found: YES 3.1.1 00:04:06.228 Run-time dependency libpcap found: YES 1.10.4 00:04:06.228 Has header "pcap.h" with dependency libpcap: YES 00:04:06.228 Compiler for C supports arguments -Wcast-qual: YES 00:04:06.228 Compiler for C supports arguments -Wdeprecated: YES 00:04:06.228 Compiler for C supports arguments -Wformat: YES 00:04:06.228 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:06.228 Compiler for C supports arguments -Wformat-security: NO 00:04:06.228 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:06.228 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:06.228 Compiler for C supports arguments -Wnested-externs: YES 00:04:06.228 Compiler for C supports arguments -Wold-style-definition: YES 00:04:06.228 Compiler for C supports arguments -Wpointer-arith: YES 00:04:06.228 Compiler for C supports arguments -Wsign-compare: YES 00:04:06.228 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:06.228 Compiler for C supports arguments -Wundef: YES 00:04:06.228 Compiler for C supports arguments -Wwrite-strings: YES 00:04:06.228 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:04:06.228 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:06.228 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:06.228 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:06.228 Program objdump found: YES (/usr/bin/objdump) 00:04:06.228 Compiler for C supports arguments -mavx512f: YES 00:04:06.228 Checking if "AVX512 checking" compiles: YES 00:04:06.228 Fetching value of define "__SSE4_2__" : 1 00:04:06.228 Fetching value of define "__AES__" : 1 00:04:06.228 Fetching value of define "__AVX__" : 1 00:04:06.228 Fetching value of define "__AVX2__" : 1 00:04:06.228 Fetching value of define "__AVX512BW__" : (undefined) 00:04:06.228 Fetching value of define "__AVX512CD__" : (undefined) 00:04:06.228 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:06.228 Fetching value of define "__AVX512F__" : (undefined) 00:04:06.228 Fetching value of define "__AVX512VL__" : (undefined) 00:04:06.228 Fetching value of define "__PCLMUL__" : 1 00:04:06.228 Fetching value of define "__RDRND__" : 1 00:04:06.228 Fetching value of define "__RDSEED__" : 1 00:04:06.228 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:06.228 Fetching value of define "__znver1__" : (undefined) 00:04:06.228 Fetching value of define "__znver2__" : (undefined) 00:04:06.228 Fetching value of define "__znver3__" : (undefined) 00:04:06.228 Fetching value of define "__znver4__" : (undefined) 00:04:06.228 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:06.228 Message: lib/log: Defining dependency "log" 00:04:06.228 Message: lib/kvargs: Defining dependency "kvargs" 00:04:06.228 Message: lib/telemetry: Defining dependency "telemetry" 00:04:06.228 Checking for function "getentropy" : NO 00:04:06.228 Message: lib/eal: Defining dependency "eal" 00:04:06.228 Message: lib/ring: Defining dependency "ring" 00:04:06.228 Message: lib/rcu: Defining dependency "rcu" 00:04:06.228 Message: lib/mempool: Defining dependency "mempool" 00:04:06.228 Message: lib/mbuf: Defining dependency "mbuf" 00:04:06.228 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:06.228 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:06.228 Compiler for C supports arguments -mpclmul: YES 00:04:06.228 Compiler for C supports arguments -maes: YES 00:04:06.228 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:06.228 Compiler for C supports arguments -mavx512bw: YES 00:04:06.228 Compiler for C supports arguments -mavx512dq: YES 00:04:06.228 Compiler for C supports arguments -mavx512vl: YES 00:04:06.228 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:06.228 Compiler for C supports arguments -mavx2: YES 00:04:06.228 Compiler for C supports arguments -mavx: YES 00:04:06.228 Message: lib/net: Defining dependency "net" 00:04:06.228 Message: lib/meter: Defining dependency "meter" 00:04:06.228 Message: lib/ethdev: Defining dependency "ethdev" 00:04:06.228 Message: lib/pci: Defining dependency "pci" 00:04:06.228 Message: lib/cmdline: Defining dependency "cmdline" 00:04:06.228 Message: lib/hash: Defining dependency "hash" 00:04:06.228 Message: lib/timer: Defining dependency "timer" 00:04:06.228 Message: lib/compressdev: Defining dependency "compressdev" 00:04:06.228 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:06.228 Message: lib/dmadev: Defining dependency "dmadev" 00:04:06.228 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:06.228 Message: lib/power: Defining 
dependency "power" 00:04:06.228 Message: lib/reorder: Defining dependency "reorder" 00:04:06.228 Message: lib/security: Defining dependency "security" 00:04:06.228 Has header "linux/userfaultfd.h" : YES 00:04:06.228 Has header "linux/vduse.h" : YES 00:04:06.228 Message: lib/vhost: Defining dependency "vhost" 00:04:06.228 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:06.228 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:06.228 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:06.228 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:06.228 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:06.228 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:06.228 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:06.228 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:06.228 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:06.228 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:06.228 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:06.228 Configuring doxy-api-html.conf using configuration 00:04:06.228 Configuring doxy-api-man.conf using configuration 00:04:06.228 Program mandb found: YES (/usr/bin/mandb) 00:04:06.228 Program sphinx-build found: NO 00:04:06.228 Configuring rte_build_config.h using configuration 00:04:06.228 Message: 00:04:06.228 ================= 00:04:06.228 Applications Enabled 00:04:06.228 ================= 00:04:06.228 00:04:06.228 apps: 00:04:06.228 00:04:06.228 00:04:06.228 Message: 00:04:06.228 ================= 00:04:06.228 Libraries Enabled 00:04:06.228 ================= 00:04:06.228 00:04:06.228 libs: 00:04:06.228 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:06.228 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:06.228 cryptodev, dmadev, power, reorder, security, vhost, 00:04:06.228 00:04:06.228 Message: 00:04:06.228 =============== 00:04:06.228 Drivers Enabled 00:04:06.228 =============== 00:04:06.228 00:04:06.228 common: 00:04:06.228 00:04:06.228 bus: 00:04:06.228 pci, vdev, 00:04:06.228 mempool: 00:04:06.228 ring, 00:04:06.228 dma: 00:04:06.228 00:04:06.228 net: 00:04:06.228 00:04:06.228 crypto: 00:04:06.228 00:04:06.228 compress: 00:04:06.228 00:04:06.228 vdpa: 00:04:06.228 00:04:06.228 00:04:06.228 Message: 00:04:06.228 ================= 00:04:06.228 Content Skipped 00:04:06.228 ================= 00:04:06.229 00:04:06.229 apps: 00:04:06.229 dumpcap: explicitly disabled via build config 00:04:06.229 graph: explicitly disabled via build config 00:04:06.229 pdump: explicitly disabled via build config 00:04:06.229 proc-info: explicitly disabled via build config 00:04:06.229 test-acl: explicitly disabled via build config 00:04:06.229 test-bbdev: explicitly disabled via build config 00:04:06.229 test-cmdline: explicitly disabled via build config 00:04:06.229 test-compress-perf: explicitly disabled via build config 00:04:06.229 test-crypto-perf: explicitly disabled via build config 00:04:06.229 test-dma-perf: explicitly disabled via build config 00:04:06.229 test-eventdev: explicitly disabled via build config 00:04:06.229 test-fib: explicitly disabled via build config 00:04:06.229 test-flow-perf: explicitly disabled via build config 00:04:06.229 test-gpudev: explicitly disabled via build config 00:04:06.229 test-mldev: explicitly disabled via build config 00:04:06.229 test-pipeline: 
explicitly disabled via build config 00:04:06.229 test-pmd: explicitly disabled via build config 00:04:06.229 test-regex: explicitly disabled via build config 00:04:06.229 test-sad: explicitly disabled via build config 00:04:06.229 test-security-perf: explicitly disabled via build config 00:04:06.229 00:04:06.229 libs: 00:04:06.229 argparse: explicitly disabled via build config 00:04:06.229 metrics: explicitly disabled via build config 00:04:06.229 acl: explicitly disabled via build config 00:04:06.229 bbdev: explicitly disabled via build config 00:04:06.229 bitratestats: explicitly disabled via build config 00:04:06.229 bpf: explicitly disabled via build config 00:04:06.229 cfgfile: explicitly disabled via build config 00:04:06.229 distributor: explicitly disabled via build config 00:04:06.229 efd: explicitly disabled via build config 00:04:06.229 eventdev: explicitly disabled via build config 00:04:06.229 dispatcher: explicitly disabled via build config 00:04:06.229 gpudev: explicitly disabled via build config 00:04:06.229 gro: explicitly disabled via build config 00:04:06.229 gso: explicitly disabled via build config 00:04:06.229 ip_frag: explicitly disabled via build config 00:04:06.229 jobstats: explicitly disabled via build config 00:04:06.229 latencystats: explicitly disabled via build config 00:04:06.229 lpm: explicitly disabled via build config 00:04:06.229 member: explicitly disabled via build config 00:04:06.229 pcapng: explicitly disabled via build config 00:04:06.229 rawdev: explicitly disabled via build config 00:04:06.229 regexdev: explicitly disabled via build config 00:04:06.229 mldev: explicitly disabled via build config 00:04:06.229 rib: explicitly disabled via build config 00:04:06.229 sched: explicitly disabled via build config 00:04:06.229 stack: explicitly disabled via build config 00:04:06.229 ipsec: explicitly disabled via build config 00:04:06.229 pdcp: explicitly disabled via build config 00:04:06.229 fib: explicitly disabled via build config 00:04:06.229 port: explicitly disabled via build config 00:04:06.229 pdump: explicitly disabled via build config 00:04:06.229 table: explicitly disabled via build config 00:04:06.229 pipeline: explicitly disabled via build config 00:04:06.229 graph: explicitly disabled via build config 00:04:06.229 node: explicitly disabled via build config 00:04:06.229 00:04:06.229 drivers: 00:04:06.229 common/cpt: not in enabled drivers build config 00:04:06.229 common/dpaax: not in enabled drivers build config 00:04:06.229 common/iavf: not in enabled drivers build config 00:04:06.229 common/idpf: not in enabled drivers build config 00:04:06.229 common/ionic: not in enabled drivers build config 00:04:06.229 common/mvep: not in enabled drivers build config 00:04:06.229 common/octeontx: not in enabled drivers build config 00:04:06.229 bus/auxiliary: not in enabled drivers build config 00:04:06.229 bus/cdx: not in enabled drivers build config 00:04:06.229 bus/dpaa: not in enabled drivers build config 00:04:06.229 bus/fslmc: not in enabled drivers build config 00:04:06.229 bus/ifpga: not in enabled drivers build config 00:04:06.229 bus/platform: not in enabled drivers build config 00:04:06.229 bus/uacce: not in enabled drivers build config 00:04:06.229 bus/vmbus: not in enabled drivers build config 00:04:06.229 common/cnxk: not in enabled drivers build config 00:04:06.229 common/mlx5: not in enabled drivers build config 00:04:06.229 common/nfp: not in enabled drivers build config 00:04:06.229 common/nitrox: not in enabled drivers build config 
00:04:06.229 common/qat: not in enabled drivers build config 00:04:06.229 common/sfc_efx: not in enabled drivers build config 00:04:06.229 mempool/bucket: not in enabled drivers build config 00:04:06.229 mempool/cnxk: not in enabled drivers build config 00:04:06.229 mempool/dpaa: not in enabled drivers build config 00:04:06.229 mempool/dpaa2: not in enabled drivers build config 00:04:06.229 mempool/octeontx: not in enabled drivers build config 00:04:06.229 mempool/stack: not in enabled drivers build config 00:04:06.229 dma/cnxk: not in enabled drivers build config 00:04:06.229 dma/dpaa: not in enabled drivers build config 00:04:06.229 dma/dpaa2: not in enabled drivers build config 00:04:06.229 dma/hisilicon: not in enabled drivers build config 00:04:06.229 dma/idxd: not in enabled drivers build config 00:04:06.229 dma/ioat: not in enabled drivers build config 00:04:06.229 dma/skeleton: not in enabled drivers build config 00:04:06.229 net/af_packet: not in enabled drivers build config 00:04:06.229 net/af_xdp: not in enabled drivers build config 00:04:06.229 net/ark: not in enabled drivers build config 00:04:06.229 net/atlantic: not in enabled drivers build config 00:04:06.229 net/avp: not in enabled drivers build config 00:04:06.229 net/axgbe: not in enabled drivers build config 00:04:06.229 net/bnx2x: not in enabled drivers build config 00:04:06.229 net/bnxt: not in enabled drivers build config 00:04:06.229 net/bonding: not in enabled drivers build config 00:04:06.229 net/cnxk: not in enabled drivers build config 00:04:06.229 net/cpfl: not in enabled drivers build config 00:04:06.229 net/cxgbe: not in enabled drivers build config 00:04:06.229 net/dpaa: not in enabled drivers build config 00:04:06.229 net/dpaa2: not in enabled drivers build config 00:04:06.229 net/e1000: not in enabled drivers build config 00:04:06.229 net/ena: not in enabled drivers build config 00:04:06.229 net/enetc: not in enabled drivers build config 00:04:06.229 net/enetfec: not in enabled drivers build config 00:04:06.229 net/enic: not in enabled drivers build config 00:04:06.229 net/failsafe: not in enabled drivers build config 00:04:06.229 net/fm10k: not in enabled drivers build config 00:04:06.229 net/gve: not in enabled drivers build config 00:04:06.229 net/hinic: not in enabled drivers build config 00:04:06.229 net/hns3: not in enabled drivers build config 00:04:06.229 net/i40e: not in enabled drivers build config 00:04:06.229 net/iavf: not in enabled drivers build config 00:04:06.229 net/ice: not in enabled drivers build config 00:04:06.229 net/idpf: not in enabled drivers build config 00:04:06.229 net/igc: not in enabled drivers build config 00:04:06.229 net/ionic: not in enabled drivers build config 00:04:06.229 net/ipn3ke: not in enabled drivers build config 00:04:06.229 net/ixgbe: not in enabled drivers build config 00:04:06.229 net/mana: not in enabled drivers build config 00:04:06.229 net/memif: not in enabled drivers build config 00:04:06.229 net/mlx4: not in enabled drivers build config 00:04:06.229 net/mlx5: not in enabled drivers build config 00:04:06.229 net/mvneta: not in enabled drivers build config 00:04:06.229 net/mvpp2: not in enabled drivers build config 00:04:06.229 net/netvsc: not in enabled drivers build config 00:04:06.229 net/nfb: not in enabled drivers build config 00:04:06.229 net/nfp: not in enabled drivers build config 00:04:06.229 net/ngbe: not in enabled drivers build config 00:04:06.229 net/null: not in enabled drivers build config 00:04:06.229 net/octeontx: not in enabled drivers 
build config 00:04:06.229 net/octeon_ep: not in enabled drivers build config 00:04:06.229 net/pcap: not in enabled drivers build config 00:04:06.229 net/pfe: not in enabled drivers build config 00:04:06.229 net/qede: not in enabled drivers build config 00:04:06.229 net/ring: not in enabled drivers build config 00:04:06.229 net/sfc: not in enabled drivers build config 00:04:06.229 net/softnic: not in enabled drivers build config 00:04:06.230 net/tap: not in enabled drivers build config 00:04:06.230 net/thunderx: not in enabled drivers build config 00:04:06.230 net/txgbe: not in enabled drivers build config 00:04:06.230 net/vdev_netvsc: not in enabled drivers build config 00:04:06.230 net/vhost: not in enabled drivers build config 00:04:06.230 net/virtio: not in enabled drivers build config 00:04:06.230 net/vmxnet3: not in enabled drivers build config 00:04:06.230 raw/*: missing internal dependency, "rawdev" 00:04:06.230 crypto/armv8: not in enabled drivers build config 00:04:06.230 crypto/bcmfs: not in enabled drivers build config 00:04:06.230 crypto/caam_jr: not in enabled drivers build config 00:04:06.230 crypto/ccp: not in enabled drivers build config 00:04:06.230 crypto/cnxk: not in enabled drivers build config 00:04:06.230 crypto/dpaa_sec: not in enabled drivers build config 00:04:06.230 crypto/dpaa2_sec: not in enabled drivers build config 00:04:06.230 crypto/ipsec_mb: not in enabled drivers build config 00:04:06.230 crypto/mlx5: not in enabled drivers build config 00:04:06.230 crypto/mvsam: not in enabled drivers build config 00:04:06.230 crypto/nitrox: not in enabled drivers build config 00:04:06.230 crypto/null: not in enabled drivers build config 00:04:06.230 crypto/octeontx: not in enabled drivers build config 00:04:06.230 crypto/openssl: not in enabled drivers build config 00:04:06.230 crypto/scheduler: not in enabled drivers build config 00:04:06.230 crypto/uadk: not in enabled drivers build config 00:04:06.230 crypto/virtio: not in enabled drivers build config 00:04:06.230 compress/isal: not in enabled drivers build config 00:04:06.230 compress/mlx5: not in enabled drivers build config 00:04:06.230 compress/nitrox: not in enabled drivers build config 00:04:06.230 compress/octeontx: not in enabled drivers build config 00:04:06.230 compress/zlib: not in enabled drivers build config 00:04:06.230 regex/*: missing internal dependency, "regexdev" 00:04:06.230 ml/*: missing internal dependency, "mldev" 00:04:06.230 vdpa/ifc: not in enabled drivers build config 00:04:06.230 vdpa/mlx5: not in enabled drivers build config 00:04:06.230 vdpa/nfp: not in enabled drivers build config 00:04:06.230 vdpa/sfc: not in enabled drivers build config 00:04:06.230 event/*: missing internal dependency, "eventdev" 00:04:06.230 baseband/*: missing internal dependency, "bbdev" 00:04:06.230 gpu/*: missing internal dependency, "gpudev" 00:04:06.230 00:04:06.230 00:04:06.230 Build targets in project: 85 00:04:06.230 00:04:06.230 DPDK 24.03.0 00:04:06.230 00:04:06.230 User defined options 00:04:06.230 buildtype : debug 00:04:06.230 default_library : shared 00:04:06.230 libdir : lib 00:04:06.230 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:06.230 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:06.230 c_link_args : 00:04:06.230 cpu_instruction_set: native 00:04:06.230 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:06.230 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:06.230 enable_docs : false 00:04:06.230 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:06.230 enable_kmods : false 00:04:06.230 max_lcores : 128 00:04:06.230 tests : false 00:04:06.230 00:04:06.230 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:06.489 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:06.489 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:06.489 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:06.489 [3/268] Linking static target lib/librte_kvargs.a 00:04:06.489 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:06.748 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:06.748 [6/268] Linking static target lib/librte_log.a 00:04:07.007 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.266 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:07.266 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:07.266 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:07.525 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:07.525 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:07.525 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:07.525 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:07.525 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:07.525 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:07.525 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:07.525 [18/268] Linking static target lib/librte_telemetry.a 00:04:07.784 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.784 [20/268] Linking target lib/librte_log.so.24.1 00:04:08.044 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:08.044 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:08.044 [23/268] Linking target lib/librte_kvargs.so.24.1 00:04:08.303 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:08.303 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:08.303 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:08.303 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:08.561 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:08.561 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:08.561 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:08.561 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:08.561 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:08.561 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.561 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:08.819 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:08.819 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:09.077 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:09.336 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:09.336 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:09.336 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:09.336 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:09.336 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:09.336 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:09.336 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:09.595 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:09.595 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:09.595 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:09.595 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:09.595 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:09.853 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:09.853 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:10.420 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:10.420 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:10.420 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:10.420 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:10.420 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:10.420 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:10.679 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:10.679 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:10.679 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:10.679 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:10.938 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:11.197 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:11.197 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:11.197 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:11.197 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:11.458 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:11.716 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:11.716 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:11.716 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:11.716 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:11.716 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:11.975 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:11.975 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:11.975 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:11.975 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:11.975 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:12.233 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:12.492 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:12.492 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:12.750 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:12.750 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:12.750 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:12.750 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:12.750 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:12.750 [86/268] Linking static target lib/librte_ring.a 00:04:13.008 [87/268] Linking static target lib/librte_eal.a 00:04:13.008 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:13.008 [89/268] Linking static target lib/librte_rcu.a 00:04:13.008 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:13.008 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:13.267 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:13.267 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:13.267 [94/268] Linking static target lib/librte_mempool.a 00:04:13.267 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:13.525 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.525 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:13.525 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:13.525 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.525 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:13.784 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:13.784 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:13.784 [103/268] Linking static target lib/librte_mbuf.a 00:04:13.784 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:14.043 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:14.043 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:14.043 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:14.343 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:14.343 [109/268] Linking static target lib/librte_meter.a 00:04:14.343 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:14.343 [111/268] Linking static target lib/librte_net.a 00:04:14.602 [112/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.602 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.861 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:14.861 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.861 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:14.861 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.861 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:15.119 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:15.378 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:15.378 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:15.636 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:15.894 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:15.894 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:15.894 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:15.894 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:15.894 [127/268] Linking static target lib/librte_pci.a 00:04:15.894 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:16.152 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:16.152 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:16.152 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:16.152 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:16.152 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:16.411 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:16.411 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:16.411 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:16.411 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.411 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:16.411 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:16.411 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:16.411 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:16.411 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:16.411 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:16.411 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:16.411 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:16.411 [146/268] Linking static target lib/librte_ethdev.a 00:04:16.411 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:16.977 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:16.977 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:17.236 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:17.236 [151/268] Compiling C 
object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:17.236 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:17.236 [153/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:17.236 [154/268] Linking static target lib/librte_cmdline.a 00:04:17.494 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:17.494 [156/268] Linking static target lib/librte_timer.a 00:04:17.494 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:17.494 [158/268] Linking static target lib/librte_hash.a 00:04:17.753 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:17.753 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:17.753 [161/268] Linking static target lib/librte_compressdev.a 00:04:18.012 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:18.012 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:18.012 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:18.270 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.528 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:18.528 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:18.528 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:18.528 [169/268] Linking static target lib/librte_dmadev.a 00:04:18.787 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:18.787 [171/268] Linking static target lib/librte_cryptodev.a 00:04:18.787 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:18.787 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:18.787 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.046 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:19.046 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.046 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.046 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:19.611 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:19.611 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:19.611 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.611 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:19.611 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:19.871 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:19.871 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:19.871 [186/268] Linking static target lib/librte_reorder.a 00:04:19.871 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:19.871 [188/268] Linking static target lib/librte_power.a 00:04:20.439 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:20.439 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:20.439 [191/268] Linking static target 
lib/librte_security.a 00:04:20.439 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:20.439 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:20.698 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.956 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:21.215 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.215 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:21.215 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.215 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.473 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:21.473 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:21.732 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:21.991 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:21.991 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:21.991 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:21.991 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:22.250 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:22.250 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:22.250 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:22.250 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:22.250 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:22.250 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:22.510 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:22.510 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:22.510 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:22.510 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:22.510 [217/268] Linking static target drivers/librte_bus_pci.a 00:04:22.510 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:22.510 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:22.510 [220/268] Linking static target drivers/librte_bus_vdev.a 00:04:22.510 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:22.510 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:22.769 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:22.769 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:22.769 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:22.769 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:22.769 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.028 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:04:23.596 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:23.596 [230/268] Linking static target lib/librte_vhost.a 00:04:24.532 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.532 [232/268] Linking target lib/librte_eal.so.24.1 00:04:24.791 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:24.791 [234/268] Linking target lib/librte_meter.so.24.1 00:04:24.791 [235/268] Linking target lib/librte_pci.so.24.1 00:04:24.791 [236/268] Linking target lib/librte_ring.so.24.1 00:04:24.791 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:24.791 [238/268] Linking target lib/librte_timer.so.24.1 00:04:24.791 [239/268] Linking target lib/librte_dmadev.so.24.1 00:04:24.791 [240/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.791 [241/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.791 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:24.791 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:24.791 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:24.791 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:24.791 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:24.791 [247/268] Linking target lib/librte_rcu.so.24.1 00:04:24.791 [248/268] Linking target lib/librte_mempool.so.24.1 00:04:24.791 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:25.050 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:25.050 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:25.050 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:25.050 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:25.309 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:25.309 [255/268] Linking target lib/librte_reorder.so.24.1 00:04:25.309 [256/268] Linking target lib/librte_net.so.24.1 00:04:25.309 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:04:25.309 [258/268] Linking target lib/librte_compressdev.so.24.1 00:04:25.309 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:25.309 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:25.568 [261/268] Linking target lib/librte_hash.so.24.1 00:04:25.568 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:25.568 [263/268] Linking target lib/librte_security.so.24.1 00:04:25.568 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:25.568 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:25.568 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:25.568 [267/268] Linking target lib/librte_power.so.24.1 00:04:25.568 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:25.826 INFO: autodetecting backend as ninja 00:04:25.826 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:57.921 CC lib/ut/ut.o 00:04:57.921 CC lib/log/log.o 00:04:57.921 CC lib/log/log_flags.o 00:04:57.921 CC lib/log/log_deprecated.o 
00:04:57.921 CC lib/ut_mock/mock.o 00:04:57.921 LIB libspdk_ut_mock.a 00:04:57.921 LIB libspdk_ut.a 00:04:57.921 LIB libspdk_log.a 00:04:57.921 SO libspdk_ut.so.2.0 00:04:57.921 SO libspdk_ut_mock.so.6.0 00:04:57.921 SO libspdk_log.so.7.1 00:04:57.921 SYMLINK libspdk_ut_mock.so 00:04:57.921 SYMLINK libspdk_ut.so 00:04:57.921 SYMLINK libspdk_log.so 00:04:57.921 CC lib/dma/dma.o 00:04:57.921 CC lib/ioat/ioat.o 00:04:57.921 CXX lib/trace_parser/trace.o 00:04:57.921 CC lib/util/base64.o 00:04:57.921 CC lib/util/bit_array.o 00:04:57.921 CC lib/util/cpuset.o 00:04:57.921 CC lib/util/crc16.o 00:04:57.921 CC lib/util/crc32c.o 00:04:57.921 CC lib/util/crc32.o 00:04:57.921 CC lib/vfio_user/host/vfio_user_pci.o 00:04:57.921 CC lib/util/crc32_ieee.o 00:04:57.921 CC lib/util/crc64.o 00:04:57.921 CC lib/util/dif.o 00:04:57.921 CC lib/vfio_user/host/vfio_user.o 00:04:57.921 CC lib/util/fd.o 00:04:57.921 LIB libspdk_dma.a 00:04:57.921 LIB libspdk_ioat.a 00:04:57.921 CC lib/util/fd_group.o 00:04:57.921 SO libspdk_dma.so.5.0 00:04:57.921 SO libspdk_ioat.so.7.0 00:04:57.921 SYMLINK libspdk_dma.so 00:04:57.921 CC lib/util/file.o 00:04:57.921 CC lib/util/hexlify.o 00:04:57.921 SYMLINK libspdk_ioat.so 00:04:57.921 CC lib/util/iov.o 00:04:57.921 CC lib/util/math.o 00:04:57.921 CC lib/util/net.o 00:04:57.922 CC lib/util/pipe.o 00:04:57.922 LIB libspdk_vfio_user.a 00:04:57.922 SO libspdk_vfio_user.so.5.0 00:04:57.922 CC lib/util/strerror_tls.o 00:04:57.922 CC lib/util/string.o 00:04:57.922 CC lib/util/uuid.o 00:04:57.922 SYMLINK libspdk_vfio_user.so 00:04:57.922 CC lib/util/xor.o 00:04:57.922 CC lib/util/zipf.o 00:04:57.922 CC lib/util/md5.o 00:04:57.922 LIB libspdk_util.a 00:04:57.922 SO libspdk_util.so.10.1 00:04:57.922 LIB libspdk_trace_parser.a 00:04:57.922 SYMLINK libspdk_util.so 00:04:57.922 SO libspdk_trace_parser.so.6.0 00:04:57.922 SYMLINK libspdk_trace_parser.so 00:04:57.922 CC lib/rdma_utils/rdma_utils.o 00:04:57.922 CC lib/json/json_parse.o 00:04:57.922 CC lib/idxd/idxd_user.o 00:04:57.922 CC lib/idxd/idxd.o 00:04:57.922 CC lib/conf/conf.o 00:04:57.922 CC lib/json/json_util.o 00:04:57.922 CC lib/vmd/led.o 00:04:57.922 CC lib/env_dpdk/env.o 00:04:57.922 CC lib/vmd/vmd.o 00:04:57.922 CC lib/idxd/idxd_kernel.o 00:04:57.922 CC lib/env_dpdk/memory.o 00:04:57.922 CC lib/json/json_write.o 00:04:57.922 LIB libspdk_conf.a 00:04:57.922 CC lib/env_dpdk/pci.o 00:04:57.922 SO libspdk_conf.so.6.0 00:04:57.922 CC lib/env_dpdk/init.o 00:04:57.922 SYMLINK libspdk_conf.so 00:04:57.922 CC lib/env_dpdk/threads.o 00:04:57.922 LIB libspdk_rdma_utils.a 00:04:57.922 CC lib/env_dpdk/pci_ioat.o 00:04:57.922 SO libspdk_rdma_utils.so.1.0 00:04:57.922 LIB libspdk_json.a 00:04:57.922 SYMLINK libspdk_rdma_utils.so 00:04:57.922 CC lib/env_dpdk/pci_virtio.o 00:04:57.922 CC lib/env_dpdk/pci_vmd.o 00:04:57.922 SO libspdk_json.so.6.0 00:04:57.922 CC lib/env_dpdk/pci_idxd.o 00:04:57.922 LIB libspdk_idxd.a 00:04:57.922 SYMLINK libspdk_json.so 00:04:57.922 CC lib/env_dpdk/pci_event.o 00:04:57.922 CC lib/env_dpdk/sigbus_handler.o 00:04:57.922 SO libspdk_idxd.so.12.1 00:04:57.922 CC lib/env_dpdk/pci_dpdk.o 00:04:57.922 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:57.922 SYMLINK libspdk_idxd.so 00:04:57.922 LIB libspdk_vmd.a 00:04:57.922 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:57.922 SO libspdk_vmd.so.6.0 00:04:57.922 SYMLINK libspdk_vmd.so 00:04:57.922 CC lib/jsonrpc/jsonrpc_server.o 00:04:57.922 CC lib/jsonrpc/jsonrpc_client.o 00:04:57.922 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:57.922 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:57.922 CC 
lib/rdma_provider/common.o 00:04:57.922 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:57.922 LIB libspdk_rdma_provider.a 00:04:57.922 LIB libspdk_jsonrpc.a 00:04:57.922 SO libspdk_rdma_provider.so.7.0 00:04:57.922 SO libspdk_jsonrpc.so.6.0 00:04:57.922 SYMLINK libspdk_rdma_provider.so 00:04:57.922 SYMLINK libspdk_jsonrpc.so 00:04:57.922 LIB libspdk_env_dpdk.a 00:04:57.922 CC lib/rpc/rpc.o 00:04:57.922 SO libspdk_env_dpdk.so.15.1 00:04:57.922 SYMLINK libspdk_env_dpdk.so 00:04:57.922 LIB libspdk_rpc.a 00:04:57.922 SO libspdk_rpc.so.6.0 00:04:57.922 SYMLINK libspdk_rpc.so 00:04:57.922 CC lib/keyring/keyring_rpc.o 00:04:57.922 CC lib/keyring/keyring.o 00:04:57.922 CC lib/trace/trace_flags.o 00:04:57.922 CC lib/trace/trace.o 00:04:57.922 CC lib/trace/trace_rpc.o 00:04:57.922 CC lib/notify/notify.o 00:04:57.922 CC lib/notify/notify_rpc.o 00:04:57.922 LIB libspdk_notify.a 00:04:57.922 LIB libspdk_keyring.a 00:04:57.922 SO libspdk_notify.so.6.0 00:04:57.922 LIB libspdk_trace.a 00:04:57.922 SO libspdk_keyring.so.2.0 00:04:57.922 SO libspdk_trace.so.11.0 00:04:57.922 SYMLINK libspdk_notify.so 00:04:57.922 SYMLINK libspdk_keyring.so 00:04:57.922 SYMLINK libspdk_trace.so 00:04:57.922 CC lib/thread/iobuf.o 00:04:57.922 CC lib/thread/thread.o 00:04:57.922 CC lib/sock/sock.o 00:04:57.922 CC lib/sock/sock_rpc.o 00:04:58.181 LIB libspdk_sock.a 00:04:58.181 SO libspdk_sock.so.10.0 00:04:58.181 SYMLINK libspdk_sock.so 00:04:58.443 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:58.443 CC lib/nvme/nvme_ctrlr.o 00:04:58.443 CC lib/nvme/nvme_fabric.o 00:04:58.443 CC lib/nvme/nvme_ns_cmd.o 00:04:58.443 CC lib/nvme/nvme_pcie_common.o 00:04:58.443 CC lib/nvme/nvme_qpair.o 00:04:58.443 CC lib/nvme/nvme_pcie.o 00:04:58.443 CC lib/nvme/nvme.o 00:04:58.443 CC lib/nvme/nvme_ns.o 00:04:59.407 LIB libspdk_thread.a 00:04:59.407 CC lib/nvme/nvme_quirks.o 00:04:59.407 SO libspdk_thread.so.11.0 00:04:59.407 CC lib/nvme/nvme_transport.o 00:04:59.407 SYMLINK libspdk_thread.so 00:04:59.407 CC lib/nvme/nvme_discovery.o 00:04:59.407 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:59.666 CC lib/accel/accel.o 00:04:59.666 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:59.666 CC lib/blob/blobstore.o 00:04:59.666 CC lib/init/json_config.o 00:04:59.666 CC lib/virtio/virtio.o 00:04:59.925 CC lib/virtio/virtio_vhost_user.o 00:04:59.925 CC lib/init/subsystem.o 00:04:59.925 CC lib/init/subsystem_rpc.o 00:05:00.184 CC lib/init/rpc.o 00:05:00.184 CC lib/virtio/virtio_vfio_user.o 00:05:00.184 CC lib/virtio/virtio_pci.o 00:05:00.184 CC lib/blob/request.o 00:05:00.184 CC lib/accel/accel_rpc.o 00:05:00.184 CC lib/accel/accel_sw.o 00:05:00.184 CC lib/fsdev/fsdev.o 00:05:00.184 LIB libspdk_init.a 00:05:00.443 SO libspdk_init.so.6.0 00:05:00.443 CC lib/fsdev/fsdev_io.o 00:05:00.443 SYMLINK libspdk_init.so 00:05:00.443 CC lib/blob/zeroes.o 00:05:00.443 LIB libspdk_virtio.a 00:05:00.443 SO libspdk_virtio.so.7.0 00:05:00.443 CC lib/nvme/nvme_tcp.o 00:05:00.702 CC lib/nvme/nvme_opal.o 00:05:00.702 SYMLINK libspdk_virtio.so 00:05:00.702 CC lib/nvme/nvme_io_msg.o 00:05:00.702 CC lib/nvme/nvme_poll_group.o 00:05:00.702 CC lib/event/app.o 00:05:00.702 CC lib/nvme/nvme_zns.o 00:05:00.702 LIB libspdk_accel.a 00:05:00.702 CC lib/nvme/nvme_stubs.o 00:05:00.702 SO libspdk_accel.so.16.0 00:05:00.961 SYMLINK libspdk_accel.so 00:05:00.961 CC lib/nvme/nvme_auth.o 00:05:00.961 CC lib/fsdev/fsdev_rpc.o 00:05:00.961 CC lib/event/reactor.o 00:05:00.961 LIB libspdk_fsdev.a 00:05:01.220 SO libspdk_fsdev.so.2.0 00:05:01.220 CC lib/nvme/nvme_cuse.o 00:05:01.220 SYMLINK libspdk_fsdev.so 
00:05:01.220 CC lib/blob/blob_bs_dev.o 00:05:01.220 CC lib/nvme/nvme_rdma.o 00:05:01.220 CC lib/event/log_rpc.o 00:05:01.479 CC lib/event/app_rpc.o 00:05:01.479 CC lib/event/scheduler_static.o 00:05:01.479 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:01.479 CC lib/bdev/bdev_rpc.o 00:05:01.479 CC lib/bdev/bdev.o 00:05:01.737 CC lib/bdev/bdev_zone.o 00:05:01.737 LIB libspdk_event.a 00:05:01.738 SO libspdk_event.so.14.0 00:05:01.738 SYMLINK libspdk_event.so 00:05:01.738 CC lib/bdev/part.o 00:05:01.738 CC lib/bdev/scsi_nvme.o 00:05:02.305 LIB libspdk_fuse_dispatcher.a 00:05:02.305 SO libspdk_fuse_dispatcher.so.1.0 00:05:02.305 SYMLINK libspdk_fuse_dispatcher.so 00:05:02.563 LIB libspdk_nvme.a 00:05:02.821 LIB libspdk_blob.a 00:05:02.821 SO libspdk_blob.so.11.0 00:05:02.821 SO libspdk_nvme.so.15.0 00:05:02.821 SYMLINK libspdk_blob.so 00:05:03.078 CC lib/lvol/lvol.o 00:05:03.078 CC lib/blobfs/tree.o 00:05:03.078 CC lib/blobfs/blobfs.o 00:05:03.078 SYMLINK libspdk_nvme.so 00:05:04.011 LIB libspdk_blobfs.a 00:05:04.011 SO libspdk_blobfs.so.10.0 00:05:04.011 SYMLINK libspdk_blobfs.so 00:05:04.011 LIB libspdk_lvol.a 00:05:04.011 SO libspdk_lvol.so.10.0 00:05:04.269 SYMLINK libspdk_lvol.so 00:05:04.269 LIB libspdk_bdev.a 00:05:04.269 SO libspdk_bdev.so.17.0 00:05:04.527 SYMLINK libspdk_bdev.so 00:05:04.785 CC lib/nvmf/ctrlr.o 00:05:04.785 CC lib/nvmf/ctrlr_discovery.o 00:05:04.785 CC lib/nbd/nbd.o 00:05:04.785 CC lib/nvmf/ctrlr_bdev.o 00:05:04.785 CC lib/nbd/nbd_rpc.o 00:05:04.785 CC lib/nvmf/subsystem.o 00:05:04.785 CC lib/nvmf/nvmf.o 00:05:04.785 CC lib/ftl/ftl_core.o 00:05:04.785 CC lib/scsi/dev.o 00:05:04.785 CC lib/ublk/ublk.o 00:05:04.785 CC lib/ftl/ftl_init.o 00:05:05.089 CC lib/scsi/lun.o 00:05:05.089 CC lib/ftl/ftl_layout.o 00:05:05.089 CC lib/ftl/ftl_debug.o 00:05:05.089 LIB libspdk_nbd.a 00:05:05.089 SO libspdk_nbd.so.7.0 00:05:05.089 CC lib/nvmf/nvmf_rpc.o 00:05:05.347 SYMLINK libspdk_nbd.so 00:05:05.347 CC lib/nvmf/transport.o 00:05:05.347 CC lib/scsi/port.o 00:05:05.347 CC lib/ftl/ftl_io.o 00:05:05.347 CC lib/ftl/ftl_sb.o 00:05:05.347 CC lib/ublk/ublk_rpc.o 00:05:05.604 CC lib/ftl/ftl_l2p.o 00:05:05.604 CC lib/scsi/scsi.o 00:05:05.604 LIB libspdk_ublk.a 00:05:05.604 CC lib/ftl/ftl_l2p_flat.o 00:05:05.604 SO libspdk_ublk.so.3.0 00:05:05.604 CC lib/nvmf/tcp.o 00:05:05.604 CC lib/nvmf/stubs.o 00:05:05.604 CC lib/scsi/scsi_bdev.o 00:05:05.604 SYMLINK libspdk_ublk.so 00:05:05.604 CC lib/ftl/ftl_nv_cache.o 00:05:05.604 CC lib/ftl/ftl_band.o 00:05:05.862 CC lib/ftl/ftl_band_ops.o 00:05:05.862 CC lib/ftl/ftl_writer.o 00:05:06.120 CC lib/ftl/ftl_rq.o 00:05:06.120 CC lib/nvmf/mdns_server.o 00:05:06.120 CC lib/nvmf/rdma.o 00:05:06.120 CC lib/nvmf/auth.o 00:05:06.120 CC lib/ftl/ftl_reloc.o 00:05:06.120 CC lib/ftl/ftl_l2p_cache.o 00:05:06.120 CC lib/scsi/scsi_pr.o 00:05:06.120 CC lib/ftl/ftl_p2l.o 00:05:06.453 CC lib/ftl/ftl_p2l_log.o 00:05:06.453 CC lib/scsi/scsi_rpc.o 00:05:06.711 CC lib/ftl/mngt/ftl_mngt.o 00:05:06.711 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:06.711 CC lib/scsi/task.o 00:05:06.711 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:06.711 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:06.711 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:06.711 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:06.970 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:06.970 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:06.970 LIB libspdk_scsi.a 00:05:06.970 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:06.970 SO libspdk_scsi.so.9.0 00:05:06.970 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:06.970 SYMLINK libspdk_scsi.so 00:05:06.970 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:05:06.970 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:06.970 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:07.228 CC lib/ftl/utils/ftl_conf.o 00:05:07.228 CC lib/ftl/utils/ftl_md.o 00:05:07.228 CC lib/iscsi/conn.o 00:05:07.228 CC lib/ftl/utils/ftl_mempool.o 00:05:07.228 CC lib/iscsi/init_grp.o 00:05:07.228 CC lib/ftl/utils/ftl_bitmap.o 00:05:07.228 CC lib/ftl/utils/ftl_property.o 00:05:07.228 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:07.484 CC lib/vhost/vhost.o 00:05:07.484 CC lib/vhost/vhost_rpc.o 00:05:07.484 CC lib/vhost/vhost_scsi.o 00:05:07.484 CC lib/vhost/vhost_blk.o 00:05:07.484 CC lib/vhost/rte_vhost_user.o 00:05:07.484 CC lib/iscsi/iscsi.o 00:05:07.484 CC lib/iscsi/param.o 00:05:07.484 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:07.742 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:07.999 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:07.999 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:07.999 CC lib/iscsi/portal_grp.o 00:05:07.999 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:08.257 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:08.257 LIB libspdk_nvmf.a 00:05:08.257 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:08.257 SO libspdk_nvmf.so.20.0 00:05:08.257 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:08.257 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:08.516 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:08.516 CC lib/iscsi/tgt_node.o 00:05:08.516 CC lib/iscsi/iscsi_subsystem.o 00:05:08.516 SYMLINK libspdk_nvmf.so 00:05:08.516 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:08.516 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:08.516 CC lib/ftl/base/ftl_base_dev.o 00:05:08.516 CC lib/ftl/base/ftl_base_bdev.o 00:05:08.775 CC lib/iscsi/iscsi_rpc.o 00:05:08.775 LIB libspdk_vhost.a 00:05:08.775 CC lib/iscsi/task.o 00:05:08.775 SO libspdk_vhost.so.8.0 00:05:08.775 CC lib/ftl/ftl_trace.o 00:05:08.775 SYMLINK libspdk_vhost.so 00:05:09.032 LIB libspdk_ftl.a 00:05:09.032 LIB libspdk_iscsi.a 00:05:09.289 SO libspdk_iscsi.so.8.0 00:05:09.289 SO libspdk_ftl.so.9.0 00:05:09.289 SYMLINK libspdk_iscsi.so 00:05:09.545 SYMLINK libspdk_ftl.so 00:05:09.803 CC module/env_dpdk/env_dpdk_rpc.o 00:05:10.060 CC module/accel/error/accel_error.o 00:05:10.060 CC module/accel/ioat/accel_ioat.o 00:05:10.060 CC module/sock/posix/posix.o 00:05:10.060 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:10.060 CC module/accel/dsa/accel_dsa.o 00:05:10.060 CC module/blob/bdev/blob_bdev.o 00:05:10.060 CC module/sock/uring/uring.o 00:05:10.060 CC module/keyring/file/keyring.o 00:05:10.060 CC module/fsdev/aio/fsdev_aio.o 00:05:10.060 LIB libspdk_env_dpdk_rpc.a 00:05:10.060 SO libspdk_env_dpdk_rpc.so.6.0 00:05:10.060 SYMLINK libspdk_env_dpdk_rpc.so 00:05:10.318 CC module/accel/dsa/accel_dsa_rpc.o 00:05:10.318 CC module/keyring/file/keyring_rpc.o 00:05:10.318 LIB libspdk_scheduler_dynamic.a 00:05:10.318 CC module/accel/error/accel_error_rpc.o 00:05:10.318 CC module/accel/ioat/accel_ioat_rpc.o 00:05:10.318 SO libspdk_scheduler_dynamic.so.4.0 00:05:10.318 LIB libspdk_blob_bdev.a 00:05:10.318 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:10.318 SYMLINK libspdk_scheduler_dynamic.so 00:05:10.318 LIB libspdk_accel_dsa.a 00:05:10.318 LIB libspdk_keyring_file.a 00:05:10.318 SO libspdk_blob_bdev.so.11.0 00:05:10.318 SO libspdk_accel_dsa.so.5.0 00:05:10.318 SO libspdk_keyring_file.so.2.0 00:05:10.318 LIB libspdk_accel_error.a 00:05:10.318 SYMLINK libspdk_blob_bdev.so 00:05:10.318 LIB libspdk_accel_ioat.a 00:05:10.318 SYMLINK libspdk_accel_dsa.so 00:05:10.577 SYMLINK libspdk_keyring_file.so 00:05:10.577 SO libspdk_accel_error.so.2.0 00:05:10.577 SO libspdk_accel_ioat.so.6.0 
00:05:10.577 SYMLINK libspdk_accel_error.so 00:05:10.577 CC module/fsdev/aio/linux_aio_mgr.o 00:05:10.577 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:10.577 SYMLINK libspdk_accel_ioat.so 00:05:10.577 CC module/accel/iaa/accel_iaa.o 00:05:10.577 CC module/keyring/linux/keyring.o 00:05:10.836 LIB libspdk_scheduler_dpdk_governor.a 00:05:10.836 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:10.836 LIB libspdk_sock_posix.a 00:05:10.836 CC module/scheduler/gscheduler/gscheduler.o 00:05:10.836 CC module/bdev/delay/vbdev_delay.o 00:05:10.836 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:10.836 LIB libspdk_fsdev_aio.a 00:05:10.836 LIB libspdk_sock_uring.a 00:05:10.836 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:10.836 CC module/accel/iaa/accel_iaa_rpc.o 00:05:10.836 SO libspdk_sock_posix.so.6.0 00:05:10.836 SO libspdk_sock_uring.so.5.0 00:05:10.836 CC module/blobfs/bdev/blobfs_bdev.o 00:05:10.836 SO libspdk_fsdev_aio.so.1.0 00:05:10.836 CC module/keyring/linux/keyring_rpc.o 00:05:10.836 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:10.836 SYMLINK libspdk_sock_posix.so 00:05:10.836 SYMLINK libspdk_sock_uring.so 00:05:10.836 SYMLINK libspdk_fsdev_aio.so 00:05:11.095 LIB libspdk_scheduler_gscheduler.a 00:05:11.095 LIB libspdk_accel_iaa.a 00:05:11.095 LIB libspdk_keyring_linux.a 00:05:11.095 SO libspdk_scheduler_gscheduler.so.4.0 00:05:11.095 SO libspdk_accel_iaa.so.3.0 00:05:11.095 SO libspdk_keyring_linux.so.1.0 00:05:11.095 SYMLINK libspdk_scheduler_gscheduler.so 00:05:11.095 SYMLINK libspdk_accel_iaa.so 00:05:11.095 SYMLINK libspdk_keyring_linux.so 00:05:11.095 CC module/bdev/error/vbdev_error.o 00:05:11.095 LIB libspdk_blobfs_bdev.a 00:05:11.095 CC module/bdev/gpt/gpt.o 00:05:11.095 CC module/bdev/lvol/vbdev_lvol.o 00:05:11.095 SO libspdk_blobfs_bdev.so.6.0 00:05:11.095 CC module/bdev/malloc/bdev_malloc.o 00:05:11.095 LIB libspdk_bdev_delay.a 00:05:11.095 SYMLINK libspdk_blobfs_bdev.so 00:05:11.095 SO libspdk_bdev_delay.so.6.0 00:05:11.445 CC module/bdev/null/bdev_null.o 00:05:11.445 CC module/bdev/passthru/vbdev_passthru.o 00:05:11.445 CC module/bdev/raid/bdev_raid.o 00:05:11.445 CC module/bdev/nvme/bdev_nvme.o 00:05:11.445 SYMLINK libspdk_bdev_delay.so 00:05:11.445 CC module/bdev/raid/bdev_raid_rpc.o 00:05:11.445 CC module/bdev/gpt/vbdev_gpt.o 00:05:11.445 CC module/bdev/split/vbdev_split.o 00:05:11.445 CC module/bdev/error/vbdev_error_rpc.o 00:05:11.446 CC module/bdev/null/bdev_null_rpc.o 00:05:11.446 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:11.704 CC module/bdev/split/vbdev_split_rpc.o 00:05:11.704 LIB libspdk_bdev_error.a 00:05:11.704 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:11.704 SO libspdk_bdev_error.so.6.0 00:05:11.704 LIB libspdk_bdev_gpt.a 00:05:11.704 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:11.704 SO libspdk_bdev_gpt.so.6.0 00:05:11.704 SYMLINK libspdk_bdev_error.so 00:05:11.704 CC module/bdev/raid/bdev_raid_sb.o 00:05:11.704 CC module/bdev/raid/raid0.o 00:05:11.704 SYMLINK libspdk_bdev_gpt.so 00:05:11.704 CC module/bdev/raid/raid1.o 00:05:11.704 LIB libspdk_bdev_malloc.a 00:05:11.704 LIB libspdk_bdev_null.a 00:05:11.704 LIB libspdk_bdev_split.a 00:05:11.704 SO libspdk_bdev_malloc.so.6.0 00:05:11.704 SO libspdk_bdev_null.so.6.0 00:05:11.704 SO libspdk_bdev_split.so.6.0 00:05:11.704 LIB libspdk_bdev_passthru.a 00:05:11.704 SO libspdk_bdev_passthru.so.6.0 00:05:11.963 SYMLINK libspdk_bdev_malloc.so 00:05:11.963 SYMLINK libspdk_bdev_null.so 00:05:11.963 SYMLINK libspdk_bdev_split.so 00:05:11.963 SYMLINK libspdk_bdev_passthru.so 00:05:11.963 CC 
module/bdev/raid/concat.o 00:05:11.963 LIB libspdk_bdev_lvol.a 00:05:11.963 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:11.963 CC module/bdev/uring/bdev_uring.o 00:05:11.963 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:11.963 SO libspdk_bdev_lvol.so.6.0 00:05:11.963 CC module/bdev/aio/bdev_aio.o 00:05:12.223 SYMLINK libspdk_bdev_lvol.so 00:05:12.223 CC module/bdev/uring/bdev_uring_rpc.o 00:05:12.223 CC module/bdev/ftl/bdev_ftl.o 00:05:12.223 CC module/bdev/iscsi/bdev_iscsi.o 00:05:12.223 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:12.223 LIB libspdk_bdev_raid.a 00:05:12.223 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:12.223 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:12.223 SO libspdk_bdev_raid.so.6.0 00:05:12.482 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:12.482 LIB libspdk_bdev_zone_block.a 00:05:12.482 LIB libspdk_bdev_uring.a 00:05:12.482 SYMLINK libspdk_bdev_raid.so 00:05:12.482 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:12.482 SO libspdk_bdev_zone_block.so.6.0 00:05:12.482 SO libspdk_bdev_uring.so.6.0 00:05:12.482 CC module/bdev/aio/bdev_aio_rpc.o 00:05:12.482 SYMLINK libspdk_bdev_uring.so 00:05:12.482 SYMLINK libspdk_bdev_zone_block.so 00:05:12.482 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:12.482 CC module/bdev/nvme/nvme_rpc.o 00:05:12.482 CC module/bdev/nvme/bdev_mdns_client.o 00:05:12.741 LIB libspdk_bdev_iscsi.a 00:05:12.741 CC module/bdev/nvme/vbdev_opal.o 00:05:12.741 LIB libspdk_bdev_ftl.a 00:05:12.741 SO libspdk_bdev_iscsi.so.6.0 00:05:12.741 LIB libspdk_bdev_aio.a 00:05:12.741 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:12.741 SO libspdk_bdev_ftl.so.6.0 00:05:12.741 SO libspdk_bdev_aio.so.6.0 00:05:12.741 SYMLINK libspdk_bdev_iscsi.so 00:05:12.741 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:12.741 SYMLINK libspdk_bdev_ftl.so 00:05:12.741 SYMLINK libspdk_bdev_aio.so 00:05:12.741 LIB libspdk_bdev_virtio.a 00:05:12.999 SO libspdk_bdev_virtio.so.6.0 00:05:12.999 SYMLINK libspdk_bdev_virtio.so 00:05:13.936 LIB libspdk_bdev_nvme.a 00:05:13.936 SO libspdk_bdev_nvme.so.7.1 00:05:14.195 SYMLINK libspdk_bdev_nvme.so 00:05:14.762 CC module/event/subsystems/iobuf/iobuf.o 00:05:14.762 CC module/event/subsystems/sock/sock.o 00:05:14.762 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:14.762 CC module/event/subsystems/vmd/vmd.o 00:05:14.762 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:14.762 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:14.762 CC module/event/subsystems/keyring/keyring.o 00:05:14.762 CC module/event/subsystems/scheduler/scheduler.o 00:05:14.762 CC module/event/subsystems/fsdev/fsdev.o 00:05:14.762 LIB libspdk_event_keyring.a 00:05:14.762 LIB libspdk_event_fsdev.a 00:05:14.762 LIB libspdk_event_vmd.a 00:05:14.762 LIB libspdk_event_sock.a 00:05:15.021 LIB libspdk_event_iobuf.a 00:05:15.021 SO libspdk_event_keyring.so.1.0 00:05:15.021 LIB libspdk_event_vhost_blk.a 00:05:15.021 SO libspdk_event_fsdev.so.1.0 00:05:15.021 LIB libspdk_event_scheduler.a 00:05:15.021 SO libspdk_event_vmd.so.6.0 00:05:15.021 SO libspdk_event_sock.so.5.0 00:05:15.021 SO libspdk_event_vhost_blk.so.3.0 00:05:15.021 SO libspdk_event_iobuf.so.3.0 00:05:15.021 SO libspdk_event_scheduler.so.4.0 00:05:15.021 SYMLINK libspdk_event_keyring.so 00:05:15.021 SYMLINK libspdk_event_vmd.so 00:05:15.021 SYMLINK libspdk_event_fsdev.so 00:05:15.021 SYMLINK libspdk_event_sock.so 00:05:15.021 SYMLINK libspdk_event_vhost_blk.so 00:05:15.021 SYMLINK libspdk_event_scheduler.so 00:05:15.021 SYMLINK libspdk_event_iobuf.so 00:05:15.280 CC module/event/subsystems/accel/accel.o 00:05:15.538 
LIB libspdk_event_accel.a 00:05:15.538 SO libspdk_event_accel.so.6.0 00:05:15.538 SYMLINK libspdk_event_accel.so 00:05:15.796 CC module/event/subsystems/bdev/bdev.o 00:05:16.054 LIB libspdk_event_bdev.a 00:05:16.054 SO libspdk_event_bdev.so.6.0 00:05:16.054 SYMLINK libspdk_event_bdev.so 00:05:16.312 CC module/event/subsystems/nbd/nbd.o 00:05:16.312 CC module/event/subsystems/scsi/scsi.o 00:05:16.312 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:16.312 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:16.312 CC module/event/subsystems/ublk/ublk.o 00:05:16.570 LIB libspdk_event_ublk.a 00:05:16.570 LIB libspdk_event_nbd.a 00:05:16.570 SO libspdk_event_ublk.so.3.0 00:05:16.570 LIB libspdk_event_scsi.a 00:05:16.570 SO libspdk_event_nbd.so.6.0 00:05:16.570 SO libspdk_event_scsi.so.6.0 00:05:16.570 SYMLINK libspdk_event_ublk.so 00:05:16.829 SYMLINK libspdk_event_nbd.so 00:05:16.829 LIB libspdk_event_nvmf.a 00:05:16.829 SYMLINK libspdk_event_scsi.so 00:05:16.829 SO libspdk_event_nvmf.so.6.0 00:05:16.829 SYMLINK libspdk_event_nvmf.so 00:05:17.088 CC module/event/subsystems/iscsi/iscsi.o 00:05:17.088 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:17.347 LIB libspdk_event_iscsi.a 00:05:17.347 LIB libspdk_event_vhost_scsi.a 00:05:17.347 SO libspdk_event_iscsi.so.6.0 00:05:17.347 SO libspdk_event_vhost_scsi.so.3.0 00:05:17.347 SYMLINK libspdk_event_iscsi.so 00:05:17.347 SYMLINK libspdk_event_vhost_scsi.so 00:05:17.607 SO libspdk.so.6.0 00:05:17.607 SYMLINK libspdk.so 00:05:17.866 CC test/rpc_client/rpc_client_test.o 00:05:17.866 CXX app/trace/trace.o 00:05:17.866 TEST_HEADER include/spdk/accel.h 00:05:17.866 TEST_HEADER include/spdk/accel_module.h 00:05:17.866 TEST_HEADER include/spdk/assert.h 00:05:17.866 TEST_HEADER include/spdk/barrier.h 00:05:17.866 TEST_HEADER include/spdk/base64.h 00:05:17.866 TEST_HEADER include/spdk/bdev.h 00:05:17.866 TEST_HEADER include/spdk/bdev_module.h 00:05:17.866 TEST_HEADER include/spdk/bdev_zone.h 00:05:17.866 TEST_HEADER include/spdk/bit_array.h 00:05:17.866 TEST_HEADER include/spdk/bit_pool.h 00:05:17.866 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:17.866 TEST_HEADER include/spdk/blob_bdev.h 00:05:17.866 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:17.866 TEST_HEADER include/spdk/blobfs.h 00:05:17.866 TEST_HEADER include/spdk/blob.h 00:05:17.866 TEST_HEADER include/spdk/conf.h 00:05:17.866 TEST_HEADER include/spdk/config.h 00:05:17.866 TEST_HEADER include/spdk/cpuset.h 00:05:17.866 TEST_HEADER include/spdk/crc16.h 00:05:17.866 TEST_HEADER include/spdk/crc32.h 00:05:17.866 TEST_HEADER include/spdk/crc64.h 00:05:17.866 TEST_HEADER include/spdk/dif.h 00:05:17.866 TEST_HEADER include/spdk/dma.h 00:05:17.866 TEST_HEADER include/spdk/endian.h 00:05:17.866 TEST_HEADER include/spdk/env_dpdk.h 00:05:17.866 TEST_HEADER include/spdk/env.h 00:05:17.866 TEST_HEADER include/spdk/event.h 00:05:17.866 CC examples/ioat/perf/perf.o 00:05:17.866 TEST_HEADER include/spdk/fd_group.h 00:05:17.866 CC test/thread/poller_perf/poller_perf.o 00:05:17.866 TEST_HEADER include/spdk/fd.h 00:05:17.866 TEST_HEADER include/spdk/file.h 00:05:17.866 TEST_HEADER include/spdk/fsdev.h 00:05:17.866 TEST_HEADER include/spdk/fsdev_module.h 00:05:17.866 CC examples/util/zipf/zipf.o 00:05:17.866 TEST_HEADER include/spdk/ftl.h 00:05:17.866 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:17.866 TEST_HEADER include/spdk/gpt_spec.h 00:05:17.866 TEST_HEADER include/spdk/hexlify.h 00:05:17.866 TEST_HEADER include/spdk/histogram_data.h 00:05:17.866 TEST_HEADER include/spdk/idxd.h 00:05:17.866 
TEST_HEADER include/spdk/idxd_spec.h 00:05:17.866 TEST_HEADER include/spdk/init.h 00:05:17.866 CC test/dma/test_dma/test_dma.o 00:05:17.866 TEST_HEADER include/spdk/ioat.h 00:05:17.866 TEST_HEADER include/spdk/ioat_spec.h 00:05:17.866 TEST_HEADER include/spdk/iscsi_spec.h 00:05:17.866 CC test/app/bdev_svc/bdev_svc.o 00:05:17.866 TEST_HEADER include/spdk/json.h 00:05:17.866 TEST_HEADER include/spdk/jsonrpc.h 00:05:17.866 TEST_HEADER include/spdk/keyring.h 00:05:17.866 TEST_HEADER include/spdk/keyring_module.h 00:05:17.866 TEST_HEADER include/spdk/likely.h 00:05:17.866 TEST_HEADER include/spdk/log.h 00:05:17.866 TEST_HEADER include/spdk/lvol.h 00:05:17.866 TEST_HEADER include/spdk/md5.h 00:05:17.866 TEST_HEADER include/spdk/memory.h 00:05:17.866 TEST_HEADER include/spdk/mmio.h 00:05:17.866 TEST_HEADER include/spdk/nbd.h 00:05:17.866 TEST_HEADER include/spdk/net.h 00:05:17.866 TEST_HEADER include/spdk/notify.h 00:05:18.128 TEST_HEADER include/spdk/nvme.h 00:05:18.128 TEST_HEADER include/spdk/nvme_intel.h 00:05:18.128 CC test/env/mem_callbacks/mem_callbacks.o 00:05:18.128 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:18.128 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:18.128 TEST_HEADER include/spdk/nvme_spec.h 00:05:18.128 TEST_HEADER include/spdk/nvme_zns.h 00:05:18.128 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:18.128 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:18.128 TEST_HEADER include/spdk/nvmf.h 00:05:18.128 TEST_HEADER include/spdk/nvmf_spec.h 00:05:18.128 TEST_HEADER include/spdk/nvmf_transport.h 00:05:18.128 TEST_HEADER include/spdk/opal.h 00:05:18.128 LINK rpc_client_test 00:05:18.128 TEST_HEADER include/spdk/opal_spec.h 00:05:18.128 TEST_HEADER include/spdk/pci_ids.h 00:05:18.128 TEST_HEADER include/spdk/pipe.h 00:05:18.128 TEST_HEADER include/spdk/queue.h 00:05:18.128 TEST_HEADER include/spdk/reduce.h 00:05:18.128 TEST_HEADER include/spdk/rpc.h 00:05:18.128 TEST_HEADER include/spdk/scheduler.h 00:05:18.128 TEST_HEADER include/spdk/scsi.h 00:05:18.128 TEST_HEADER include/spdk/scsi_spec.h 00:05:18.128 TEST_HEADER include/spdk/sock.h 00:05:18.128 TEST_HEADER include/spdk/stdinc.h 00:05:18.128 TEST_HEADER include/spdk/string.h 00:05:18.128 TEST_HEADER include/spdk/thread.h 00:05:18.128 TEST_HEADER include/spdk/trace.h 00:05:18.128 TEST_HEADER include/spdk/trace_parser.h 00:05:18.128 TEST_HEADER include/spdk/tree.h 00:05:18.128 TEST_HEADER include/spdk/ublk.h 00:05:18.128 TEST_HEADER include/spdk/util.h 00:05:18.128 TEST_HEADER include/spdk/uuid.h 00:05:18.128 TEST_HEADER include/spdk/version.h 00:05:18.128 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:18.128 LINK interrupt_tgt 00:05:18.128 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:18.128 TEST_HEADER include/spdk/vhost.h 00:05:18.128 TEST_HEADER include/spdk/vmd.h 00:05:18.128 TEST_HEADER include/spdk/xor.h 00:05:18.128 TEST_HEADER include/spdk/zipf.h 00:05:18.128 CXX test/cpp_headers/accel.o 00:05:18.128 LINK poller_perf 00:05:18.128 LINK zipf 00:05:18.128 LINK ioat_perf 00:05:18.128 CXX test/cpp_headers/accel_module.o 00:05:18.128 LINK bdev_svc 00:05:18.389 CXX test/cpp_headers/assert.o 00:05:18.389 LINK spdk_trace 00:05:18.389 CC test/app/histogram_perf/histogram_perf.o 00:05:18.389 CC test/app/jsoncat/jsoncat.o 00:05:18.389 CC examples/ioat/verify/verify.o 00:05:18.389 CXX test/cpp_headers/barrier.o 00:05:18.389 CC test/app/stub/stub.o 00:05:18.648 CC test/env/vtophys/vtophys.o 00:05:18.648 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:18.648 LINK test_dma 00:05:18.648 CC app/trace_record/trace_record.o 00:05:18.648 LINK 
histogram_perf 00:05:18.648 LINK jsoncat 00:05:18.648 CXX test/cpp_headers/base64.o 00:05:18.648 LINK verify 00:05:18.648 LINK mem_callbacks 00:05:18.648 LINK vtophys 00:05:18.648 LINK stub 00:05:18.907 CXX test/cpp_headers/bdev.o 00:05:18.907 CC test/env/memory/memory_ut.o 00:05:18.907 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:18.907 LINK spdk_trace_record 00:05:18.907 CC test/event/event_perf/event_perf.o 00:05:18.907 CC test/env/pci/pci_ut.o 00:05:18.907 LINK nvme_fuzz 00:05:19.166 LINK env_dpdk_post_init 00:05:19.166 CC examples/thread/thread/thread_ex.o 00:05:19.166 CXX test/cpp_headers/bdev_module.o 00:05:19.166 CC examples/sock/hello_world/hello_sock.o 00:05:19.166 LINK event_perf 00:05:19.166 CC test/nvme/aer/aer.o 00:05:19.166 CC app/nvmf_tgt/nvmf_main.o 00:05:19.166 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:19.166 CXX test/cpp_headers/bdev_zone.o 00:05:19.425 LINK thread 00:05:19.425 LINK nvmf_tgt 00:05:19.425 LINK pci_ut 00:05:19.425 CC test/event/reactor/reactor.o 00:05:19.425 LINK hello_sock 00:05:19.425 CC app/iscsi_tgt/iscsi_tgt.o 00:05:19.425 LINK aer 00:05:19.425 CXX test/cpp_headers/bit_array.o 00:05:19.684 LINK reactor 00:05:19.684 LINK iscsi_tgt 00:05:19.684 CC app/spdk_lspci/spdk_lspci.o 00:05:19.684 CC app/spdk_tgt/spdk_tgt.o 00:05:19.684 CXX test/cpp_headers/bit_pool.o 00:05:19.684 CC test/nvme/reset/reset.o 00:05:19.684 CC app/spdk_nvme_perf/perf.o 00:05:19.684 CC examples/vmd/lsvmd/lsvmd.o 00:05:19.944 LINK spdk_lspci 00:05:19.944 CC test/event/reactor_perf/reactor_perf.o 00:05:19.944 CXX test/cpp_headers/blob_bdev.o 00:05:19.944 LINK lsvmd 00:05:19.944 LINK spdk_tgt 00:05:19.944 LINK reset 00:05:19.944 LINK reactor_perf 00:05:19.944 CC app/spdk_nvme_identify/identify.o 00:05:19.944 CC app/spdk_nvme_discover/discovery_aer.o 00:05:20.203 CXX test/cpp_headers/blobfs_bdev.o 00:05:20.203 LINK memory_ut 00:05:20.203 CC examples/vmd/led/led.o 00:05:20.203 CC test/nvme/sgl/sgl.o 00:05:20.203 CC app/spdk_top/spdk_top.o 00:05:20.203 CC test/event/app_repeat/app_repeat.o 00:05:20.203 LINK spdk_nvme_discover 00:05:20.203 CXX test/cpp_headers/blobfs.o 00:05:20.462 LINK led 00:05:20.462 LINK app_repeat 00:05:20.462 LINK sgl 00:05:20.462 CXX test/cpp_headers/blob.o 00:05:20.462 CC test/accel/dif/dif.o 00:05:20.722 CC app/vhost/vhost.o 00:05:20.722 CC examples/idxd/perf/perf.o 00:05:20.722 CXX test/cpp_headers/conf.o 00:05:20.722 LINK spdk_nvme_perf 00:05:20.722 CC test/nvme/e2edp/nvme_dp.o 00:05:20.722 CC test/event/scheduler/scheduler.o 00:05:20.722 LINK spdk_nvme_identify 00:05:20.981 LINK vhost 00:05:20.981 CXX test/cpp_headers/config.o 00:05:20.981 CXX test/cpp_headers/cpuset.o 00:05:20.981 LINK iscsi_fuzz 00:05:20.981 CC app/spdk_dd/spdk_dd.o 00:05:20.981 LINK scheduler 00:05:20.981 LINK idxd_perf 00:05:20.981 LINK nvme_dp 00:05:20.981 CXX test/cpp_headers/crc16.o 00:05:21.239 LINK spdk_top 00:05:21.239 CC test/blobfs/mkfs/mkfs.o 00:05:21.239 CXX test/cpp_headers/crc32.o 00:05:21.240 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:21.240 LINK dif 00:05:21.240 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:21.240 CC test/nvme/overhead/overhead.o 00:05:21.240 CC test/lvol/esnap/esnap.o 00:05:21.499 CC app/fio/nvme/fio_plugin.o 00:05:21.499 LINK mkfs 00:05:21.499 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:21.499 CXX test/cpp_headers/crc64.o 00:05:21.499 LINK spdk_dd 00:05:21.499 CXX test/cpp_headers/dif.o 00:05:21.499 CC examples/accel/perf/accel_perf.o 00:05:21.758 LINK overhead 00:05:21.758 CXX test/cpp_headers/dma.o 00:05:21.758 CXX 
test/cpp_headers/endian.o 00:05:21.758 LINK vhost_fuzz 00:05:21.758 LINK hello_fsdev 00:05:21.758 CC app/fio/bdev/fio_plugin.o 00:05:21.758 CC test/bdev/bdevio/bdevio.o 00:05:22.018 CXX test/cpp_headers/env_dpdk.o 00:05:22.018 CC test/nvme/err_injection/err_injection.o 00:05:22.018 CXX test/cpp_headers/env.o 00:05:22.018 LINK spdk_nvme 00:05:22.018 CC test/nvme/startup/startup.o 00:05:22.018 CXX test/cpp_headers/event.o 00:05:22.018 CXX test/cpp_headers/fd_group.o 00:05:22.018 CC examples/blob/hello_world/hello_blob.o 00:05:22.018 LINK accel_perf 00:05:22.018 CXX test/cpp_headers/fd.o 00:05:22.018 LINK err_injection 00:05:22.276 LINK bdevio 00:05:22.276 LINK startup 00:05:22.276 CXX test/cpp_headers/file.o 00:05:22.276 CXX test/cpp_headers/fsdev.o 00:05:22.276 CXX test/cpp_headers/fsdev_module.o 00:05:22.276 LINK spdk_bdev 00:05:22.276 LINK hello_blob 00:05:22.536 CC examples/nvme/hello_world/hello_world.o 00:05:22.536 CC examples/blob/cli/blobcli.o 00:05:22.536 CXX test/cpp_headers/ftl.o 00:05:22.536 CC examples/nvme/reconnect/reconnect.o 00:05:22.536 CC test/nvme/reserve/reserve.o 00:05:22.536 CXX test/cpp_headers/fuse_dispatcher.o 00:05:22.536 CC test/nvme/simple_copy/simple_copy.o 00:05:22.536 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:22.795 CXX test/cpp_headers/gpt_spec.o 00:05:22.795 LINK hello_world 00:05:22.795 LINK reserve 00:05:22.795 CC test/nvme/connect_stress/connect_stress.o 00:05:22.795 CC examples/bdev/hello_world/hello_bdev.o 00:05:22.795 LINK simple_copy 00:05:22.795 CXX test/cpp_headers/hexlify.o 00:05:22.795 CXX test/cpp_headers/histogram_data.o 00:05:22.795 LINK reconnect 00:05:23.054 LINK blobcli 00:05:23.054 LINK connect_stress 00:05:23.054 CC examples/bdev/bdevperf/bdevperf.o 00:05:23.054 LINK nvme_manage 00:05:23.054 LINK hello_bdev 00:05:23.054 CC test/nvme/boot_partition/boot_partition.o 00:05:23.054 CXX test/cpp_headers/idxd.o 00:05:23.054 CC test/nvme/compliance/nvme_compliance.o 00:05:23.054 CC test/nvme/fused_ordering/fused_ordering.o 00:05:23.312 CXX test/cpp_headers/idxd_spec.o 00:05:23.312 CXX test/cpp_headers/init.o 00:05:23.312 CC examples/nvme/arbitration/arbitration.o 00:05:23.312 CXX test/cpp_headers/ioat.o 00:05:23.312 LINK boot_partition 00:05:23.312 CXX test/cpp_headers/ioat_spec.o 00:05:23.312 CXX test/cpp_headers/iscsi_spec.o 00:05:23.312 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:23.312 LINK fused_ordering 00:05:23.571 CXX test/cpp_headers/json.o 00:05:23.571 LINK nvme_compliance 00:05:23.571 CC test/nvme/fdp/fdp.o 00:05:23.571 LINK doorbell_aers 00:05:23.571 LINK arbitration 00:05:23.571 CC examples/nvme/hotplug/hotplug.o 00:05:23.571 CXX test/cpp_headers/jsonrpc.o 00:05:23.571 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:23.571 CC test/nvme/cuse/cuse.o 00:05:23.571 CXX test/cpp_headers/keyring.o 00:05:23.829 LINK bdevperf 00:05:23.829 CXX test/cpp_headers/keyring_module.o 00:05:23.829 CC examples/nvme/abort/abort.o 00:05:23.829 CXX test/cpp_headers/likely.o 00:05:23.829 LINK cmb_copy 00:05:23.829 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:23.829 LINK hotplug 00:05:24.088 LINK fdp 00:05:24.088 CXX test/cpp_headers/log.o 00:05:24.088 CXX test/cpp_headers/lvol.o 00:05:24.088 CXX test/cpp_headers/md5.o 00:05:24.088 CXX test/cpp_headers/memory.o 00:05:24.088 CXX test/cpp_headers/mmio.o 00:05:24.088 LINK pmr_persistence 00:05:24.088 CXX test/cpp_headers/nbd.o 00:05:24.088 CXX test/cpp_headers/net.o 00:05:24.348 CXX test/cpp_headers/notify.o 00:05:24.348 LINK abort 00:05:24.348 CXX test/cpp_headers/nvme.o 00:05:24.348 CXX 
test/cpp_headers/nvme_intel.o 00:05:24.348 CXX test/cpp_headers/nvme_ocssd.o 00:05:24.348 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:24.348 CXX test/cpp_headers/nvme_spec.o 00:05:24.348 CXX test/cpp_headers/nvme_zns.o 00:05:24.348 CXX test/cpp_headers/nvmf_cmd.o 00:05:24.348 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:24.348 CXX test/cpp_headers/nvmf.o 00:05:24.348 CXX test/cpp_headers/nvmf_spec.o 00:05:24.608 CXX test/cpp_headers/nvmf_transport.o 00:05:24.608 CXX test/cpp_headers/opal.o 00:05:24.608 CXX test/cpp_headers/opal_spec.o 00:05:24.608 CC examples/nvmf/nvmf/nvmf.o 00:05:24.608 CXX test/cpp_headers/pci_ids.o 00:05:24.608 CXX test/cpp_headers/pipe.o 00:05:24.608 CXX test/cpp_headers/queue.o 00:05:24.608 CXX test/cpp_headers/reduce.o 00:05:24.608 CXX test/cpp_headers/rpc.o 00:05:24.608 CXX test/cpp_headers/scheduler.o 00:05:24.867 CXX test/cpp_headers/scsi.o 00:05:24.867 CXX test/cpp_headers/scsi_spec.o 00:05:24.867 CXX test/cpp_headers/sock.o 00:05:24.867 CXX test/cpp_headers/stdinc.o 00:05:24.867 CXX test/cpp_headers/string.o 00:05:24.867 CXX test/cpp_headers/thread.o 00:05:24.867 LINK nvmf 00:05:24.867 CXX test/cpp_headers/trace.o 00:05:24.867 CXX test/cpp_headers/trace_parser.o 00:05:24.867 CXX test/cpp_headers/tree.o 00:05:24.867 CXX test/cpp_headers/ublk.o 00:05:24.867 CXX test/cpp_headers/util.o 00:05:25.126 CXX test/cpp_headers/uuid.o 00:05:25.126 CXX test/cpp_headers/version.o 00:05:25.126 CXX test/cpp_headers/vfio_user_pci.o 00:05:25.126 CXX test/cpp_headers/vfio_user_spec.o 00:05:25.126 CXX test/cpp_headers/vhost.o 00:05:25.126 LINK cuse 00:05:25.126 CXX test/cpp_headers/vmd.o 00:05:25.126 CXX test/cpp_headers/xor.o 00:05:25.126 CXX test/cpp_headers/zipf.o 00:05:26.506 LINK esnap 00:05:27.075 00:05:27.075 real 1m33.401s 00:05:27.075 user 8m18.072s 00:05:27.075 sys 1m46.903s 00:05:27.075 08:38:57 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:27.075 ************************************ 00:05:27.075 END TEST make 00:05:27.075 ************************************ 00:05:27.075 08:38:57 make -- common/autotest_common.sh@10 -- $ set +x 00:05:27.075 08:38:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:27.075 08:38:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:27.075 08:38:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:27.075 08:38:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:27.075 08:38:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:27.075 08:38:57 -- pm/common@44 -- $ pid=5252 00:05:27.075 08:38:57 -- pm/common@50 -- $ kill -TERM 5252 00:05:27.075 08:38:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:27.075 08:38:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:27.075 08:38:57 -- pm/common@44 -- $ pid=5254 00:05:27.075 08:38:57 -- pm/common@50 -- $ kill -TERM 5254 00:05:27.075 08:38:57 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:27.075 08:38:57 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:27.075 08:38:57 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.075 08:38:57 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.075 08:38:57 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:27.075 08:38:57 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:27.075 08:38:57 -- scripts/common.sh@373 -- # 
cmp_versions 1.15 '<' 2 00:05:27.075 08:38:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.075 08:38:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.075 08:38:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.075 08:38:57 -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.075 08:38:57 -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.075 08:38:57 -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.075 08:38:57 -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.075 08:38:57 -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.075 08:38:57 -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.075 08:38:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.075 08:38:57 -- scripts/common.sh@344 -- # case "$op" in 00:05:27.075 08:38:57 -- scripts/common.sh@345 -- # : 1 00:05:27.075 08:38:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.075 08:38:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.075 08:38:57 -- scripts/common.sh@365 -- # decimal 1 00:05:27.075 08:38:57 -- scripts/common.sh@353 -- # local d=1 00:05:27.075 08:38:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.075 08:38:57 -- scripts/common.sh@355 -- # echo 1 00:05:27.334 08:38:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.334 08:38:57 -- scripts/common.sh@366 -- # decimal 2 00:05:27.334 08:38:57 -- scripts/common.sh@353 -- # local d=2 00:05:27.334 08:38:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.334 08:38:57 -- scripts/common.sh@355 -- # echo 2 00:05:27.334 08:38:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.334 08:38:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.334 08:38:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.334 08:38:57 -- scripts/common.sh@368 -- # return 0 00:05:27.334 08:38:57 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.334 08:38:57 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:27.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.334 --rc genhtml_branch_coverage=1 00:05:27.334 --rc genhtml_function_coverage=1 00:05:27.334 --rc genhtml_legend=1 00:05:27.334 --rc geninfo_all_blocks=1 00:05:27.334 --rc geninfo_unexecuted_blocks=1 00:05:27.334 00:05:27.334 ' 00:05:27.334 08:38:57 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:27.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.334 --rc genhtml_branch_coverage=1 00:05:27.334 --rc genhtml_function_coverage=1 00:05:27.334 --rc genhtml_legend=1 00:05:27.334 --rc geninfo_all_blocks=1 00:05:27.334 --rc geninfo_unexecuted_blocks=1 00:05:27.334 00:05:27.334 ' 00:05:27.334 08:38:57 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.335 --rc genhtml_branch_coverage=1 00:05:27.335 --rc genhtml_function_coverage=1 00:05:27.335 --rc genhtml_legend=1 00:05:27.335 --rc geninfo_all_blocks=1 00:05:27.335 --rc geninfo_unexecuted_blocks=1 00:05:27.335 00:05:27.335 ' 00:05:27.335 08:38:57 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.335 --rc genhtml_branch_coverage=1 00:05:27.335 --rc genhtml_function_coverage=1 00:05:27.335 --rc genhtml_legend=1 00:05:27.335 --rc geninfo_all_blocks=1 00:05:27.335 --rc geninfo_unexecuted_blocks=1 00:05:27.335 00:05:27.335 ' 00:05:27.335 08:38:57 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:27.335 08:38:57 -- nvmf/common.sh@7 -- # uname -s 00:05:27.335 08:38:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.335 08:38:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.335 08:38:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.335 08:38:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.335 08:38:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.335 08:38:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.335 08:38:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.335 08:38:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.335 08:38:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.335 08:38:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.335 08:38:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:05:27.335 08:38:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:05:27.335 08:38:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.335 08:38:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.335 08:38:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:27.335 08:38:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.335 08:38:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:27.335 08:38:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.335 08:38:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.335 08:38:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.335 08:38:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.335 08:38:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.335 08:38:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.335 08:38:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.335 08:38:58 -- paths/export.sh@5 -- # export PATH 00:05:27.335 08:38:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.335 08:38:58 -- nvmf/common.sh@51 -- # : 0 00:05:27.335 08:38:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.335 08:38:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.335 08:38:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.335 08:38:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.335 08:38:58 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.335 08:38:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.335 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.335 08:38:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.335 08:38:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.335 08:38:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.335 08:38:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:27.335 08:38:58 -- spdk/autotest.sh@32 -- # uname -s 00:05:27.335 08:38:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:27.335 08:38:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:27.335 08:38:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:27.335 08:38:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:27.335 08:38:58 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:27.335 08:38:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:27.335 08:38:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:27.335 08:38:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:27.335 08:38:58 -- spdk/autotest.sh@48 -- # udevadm_pid=54404 00:05:27.335 08:38:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:27.335 08:38:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:27.335 08:38:58 -- pm/common@17 -- # local monitor 00:05:27.335 08:38:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:27.335 08:38:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:27.335 08:38:58 -- pm/common@25 -- # sleep 1 00:05:27.335 08:38:58 -- pm/common@21 -- # date +%s 00:05:27.335 08:38:58 -- pm/common@21 -- # date +%s 00:05:27.335 08:38:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732091938 00:05:27.335 08:38:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732091938 00:05:27.335 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732091938_collect-vmstat.pm.log 00:05:27.335 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732091938_collect-cpu-load.pm.log 00:05:28.272 08:38:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:28.272 08:38:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:28.272 08:38:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.272 08:38:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.272 08:38:59 -- spdk/autotest.sh@59 -- # create_test_list 00:05:28.272 08:38:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:28.272 08:38:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.272 08:38:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:28.272 08:38:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:28.531 08:38:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:28.531 08:38:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:28.531 08:38:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:28.531 08:38:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:05:28.531 08:38:59 -- common/autotest_common.sh@1457 -- # uname 00:05:28.531 08:38:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:28.531 08:38:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:28.531 08:38:59 -- common/autotest_common.sh@1477 -- # uname 00:05:28.531 08:38:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:28.531 08:38:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:28.531 08:38:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:28.531 lcov: LCOV version 1.15 00:05:28.531 08:38:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:46.640 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:46.640 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:04.726 08:39:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:04.726 08:39:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.726 08:39:33 -- common/autotest_common.sh@10 -- # set +x 00:06:04.726 08:39:33 -- spdk/autotest.sh@78 -- # rm -f 00:06:04.726 08:39:33 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:04.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.726 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:04.726 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:04.726 08:39:34 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:04.726 08:39:34 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:04.726 08:39:34 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:04.726 08:39:34 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:04.726 08:39:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:04.726 08:39:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:04.726 08:39:34 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:04.726 08:39:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:04.726 08:39:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.726 08:39:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:04.726 08:39:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:04.726 08:39:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:04.726 08:39:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:04.726 08:39:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.726 08:39:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:04.726 08:39:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:04.726 08:39:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:04.726 08:39:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:04.726 08:39:34 -- common/autotest_common.sh@1653 -- 
# [[ none != none ]] 00:06:04.726 08:39:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:04.726 08:39:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:04.726 08:39:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:04.726 08:39:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:04.726 08:39:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.726 08:39:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:04.726 08:39:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:04.726 08:39:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:04.726 08:39:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:04.726 08:39:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:04.726 08:39:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:04.726 No valid GPT data, bailing 00:06:04.726 08:39:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:04.726 08:39:34 -- scripts/common.sh@394 -- # pt= 00:06:04.726 08:39:34 -- scripts/common.sh@395 -- # return 1 00:06:04.726 08:39:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:04.726 1+0 records in 00:06:04.726 1+0 records out 00:06:04.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050497 s, 208 MB/s 00:06:04.727 08:39:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:04.727 08:39:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:04.727 08:39:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:04.727 08:39:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:04.727 08:39:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:04.727 No valid GPT data, bailing 00:06:04.727 08:39:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:04.727 08:39:34 -- scripts/common.sh@394 -- # pt= 00:06:04.727 08:39:34 -- scripts/common.sh@395 -- # return 1 00:06:04.727 08:39:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:04.727 1+0 records in 00:06:04.727 1+0 records out 00:06:04.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00538232 s, 195 MB/s 00:06:04.727 08:39:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:04.727 08:39:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:04.727 08:39:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:04.727 08:39:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:04.727 08:39:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:04.727 No valid GPT data, bailing 00:06:04.727 08:39:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:04.727 08:39:34 -- scripts/common.sh@394 -- # pt= 00:06:04.727 08:39:34 -- scripts/common.sh@395 -- # return 1 00:06:04.727 08:39:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:04.727 1+0 records in 00:06:04.727 1+0 records out 00:06:04.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00547937 s, 191 MB/s 00:06:04.727 08:39:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:04.727 08:39:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:04.727 08:39:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:04.727 08:39:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:04.727 08:39:34 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:04.727 No valid GPT data, bailing 00:06:04.727 08:39:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:04.727 08:39:34 -- scripts/common.sh@394 -- # pt= 00:06:04.727 08:39:34 -- scripts/common.sh@395 -- # return 1 00:06:04.727 08:39:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:04.727 1+0 records in 00:06:04.727 1+0 records out 00:06:04.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456181 s, 230 MB/s 00:06:04.727 08:39:34 -- spdk/autotest.sh@105 -- # sync 00:06:04.727 08:39:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:04.727 08:39:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:04.727 08:39:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:06.103 08:39:36 -- spdk/autotest.sh@111 -- # uname -s 00:06:06.103 08:39:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:06.103 08:39:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:06.103 08:39:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:06.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:06.670 Hugepages 00:06:06.670 node hugesize free / total 00:06:06.670 node0 1048576kB 0 / 0 00:06:06.670 node0 2048kB 0 / 0 00:06:06.670 00:06:06.670 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:06.927 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:06.927 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:06.928 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:06.928 08:39:37 -- spdk/autotest.sh@117 -- # uname -s 00:06:06.928 08:39:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:06.928 08:39:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:06.928 08:39:37 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:07.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:07.862 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:07.862 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:07.862 08:39:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:08.799 08:39:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:08.799 08:39:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:08.799 08:39:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:08.799 08:39:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:08.799 08:39:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:08.799 08:39:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:08.799 08:39:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:08.799 08:39:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:08.799 08:39:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:09.058 08:39:39 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:09.058 08:39:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:09.058 08:39:39 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:09.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:06:09.315 Waiting for block devices as requested 00:06:09.315 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:09.572 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:09.572 08:39:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:09.572 08:39:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:09.572 08:39:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:09.572 08:39:40 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:09.572 08:39:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:09.572 08:39:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:09.572 08:39:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:09.572 08:39:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:09.572 08:39:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:09.572 08:39:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:09.572 08:39:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:09.573 08:39:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:09.573 08:39:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:09.573 08:39:40 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:09.573 08:39:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:09.573 08:39:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:09.573 08:39:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:09.573 08:39:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:09.573 08:39:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:09.573 08:39:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:09.573 08:39:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:09.573 08:39:40 -- common/autotest_common.sh@1543 -- # continue 00:06:09.573 08:39:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:09.573 08:39:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:09.573 08:39:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:09.573 08:39:40 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:09.573 08:39:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:09.573 08:39:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:09.573 08:39:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:09.573 08:39:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:09.573 08:39:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:09.573 08:39:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:09.573 08:39:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:09.573 08:39:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:09.573 08:39:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:09.573 08:39:40 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:09.573 08:39:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:09.573 08:39:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:09.573 08:39:40 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:09.573 08:39:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:09.573 08:39:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:09.573 08:39:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:09.573 08:39:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:09.573 08:39:40 -- common/autotest_common.sh@1543 -- # continue 00:06:09.573 08:39:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:09.573 08:39:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.573 08:39:40 -- common/autotest_common.sh@10 -- # set +x 00:06:09.573 08:39:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:09.573 08:39:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.573 08:39:40 -- common/autotest_common.sh@10 -- # set +x 00:06:09.573 08:39:40 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:10.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:10.508 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:10.508 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:10.508 08:39:41 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:10.508 08:39:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.508 08:39:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.767 08:39:41 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:10.767 08:39:41 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:10.767 08:39:41 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:10.767 08:39:41 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:10.767 08:39:41 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:10.767 08:39:41 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:10.767 08:39:41 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:10.767 08:39:41 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:10.767 08:39:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:10.767 08:39:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:10.767 08:39:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:10.767 08:39:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:10.767 08:39:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:10.767 08:39:41 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:10.767 08:39:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:10.767 08:39:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:10.767 08:39:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:10.767 08:39:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:10.767 08:39:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:10.767 08:39:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:10.767 08:39:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:10.767 08:39:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:10.768 08:39:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:10.768 08:39:41 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:10.768 08:39:41 -- common/autotest_common.sh@1572 -- # return 0 
00:06:10.768 08:39:41 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:10.768 08:39:41 -- common/autotest_common.sh@1580 -- # return 0 00:06:10.768 08:39:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:10.768 08:39:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:10.768 08:39:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:10.768 08:39:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:10.768 08:39:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:10.768 08:39:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.768 08:39:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.768 08:39:41 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:10.768 08:39:41 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:10.768 08:39:41 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:10.768 08:39:41 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:10.768 08:39:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.768 08:39:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.768 08:39:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.768 ************************************ 00:06:10.768 START TEST env 00:06:10.768 ************************************ 00:06:10.768 08:39:41 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:10.768 * Looking for test storage... 00:06:10.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:10.768 08:39:41 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.768 08:39:41 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.768 08:39:41 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.027 08:39:41 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.027 08:39:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.027 08:39:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.027 08:39:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.027 08:39:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.027 08:39:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.027 08:39:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.027 08:39:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.027 08:39:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.027 08:39:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.027 08:39:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.027 08:39:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.027 08:39:41 env -- scripts/common.sh@344 -- # case "$op" in 00:06:11.027 08:39:41 env -- scripts/common.sh@345 -- # : 1 00:06:11.027 08:39:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.027 08:39:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.027 08:39:41 env -- scripts/common.sh@365 -- # decimal 1 00:06:11.027 08:39:41 env -- scripts/common.sh@353 -- # local d=1 00:06:11.027 08:39:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.027 08:39:41 env -- scripts/common.sh@355 -- # echo 1 00:06:11.027 08:39:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.027 08:39:41 env -- scripts/common.sh@366 -- # decimal 2 00:06:11.027 08:39:41 env -- scripts/common.sh@353 -- # local d=2 00:06:11.027 08:39:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.027 08:39:41 env -- scripts/common.sh@355 -- # echo 2 00:06:11.027 08:39:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.027 08:39:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.027 08:39:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.027 08:39:41 env -- scripts/common.sh@368 -- # return 0 00:06:11.027 08:39:41 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.027 08:39:41 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.027 --rc genhtml_branch_coverage=1 00:06:11.027 --rc genhtml_function_coverage=1 00:06:11.027 --rc genhtml_legend=1 00:06:11.027 --rc geninfo_all_blocks=1 00:06:11.027 --rc geninfo_unexecuted_blocks=1 00:06:11.027 00:06:11.027 ' 00:06:11.027 08:39:41 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.027 --rc genhtml_branch_coverage=1 00:06:11.027 --rc genhtml_function_coverage=1 00:06:11.027 --rc genhtml_legend=1 00:06:11.027 --rc geninfo_all_blocks=1 00:06:11.027 --rc geninfo_unexecuted_blocks=1 00:06:11.027 00:06:11.027 ' 00:06:11.027 08:39:41 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.027 --rc genhtml_branch_coverage=1 00:06:11.027 --rc genhtml_function_coverage=1 00:06:11.027 --rc genhtml_legend=1 00:06:11.027 --rc geninfo_all_blocks=1 00:06:11.027 --rc geninfo_unexecuted_blocks=1 00:06:11.027 00:06:11.027 ' 00:06:11.027 08:39:41 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.027 --rc genhtml_branch_coverage=1 00:06:11.027 --rc genhtml_function_coverage=1 00:06:11.027 --rc genhtml_legend=1 00:06:11.027 --rc geninfo_all_blocks=1 00:06:11.027 --rc geninfo_unexecuted_blocks=1 00:06:11.027 00:06:11.027 ' 00:06:11.027 08:39:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:11.027 08:39:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.027 08:39:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.027 08:39:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.027 ************************************ 00:06:11.027 START TEST env_memory 00:06:11.027 ************************************ 00:06:11.027 08:39:41 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:11.027 00:06:11.027 00:06:11.027 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.027 http://cunit.sourceforge.net/ 00:06:11.027 00:06:11.027 00:06:11.027 Suite: memory 00:06:11.027 Test: alloc and free memory map ...[2024-11-20 08:39:41.811816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:11.027 passed 00:06:11.027 Test: mem map translation ...[2024-11-20 08:39:41.843395] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:11.027 [2024-11-20 08:39:41.843443] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:11.027 [2024-11-20 08:39:41.843500] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:11.027 [2024-11-20 08:39:41.843511] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:11.027 passed 00:06:11.027 Test: mem map registration ...[2024-11-20 08:39:41.907446] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:11.027 [2024-11-20 08:39:41.907498] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:11.027 passed 00:06:11.286 Test: mem map adjacent registrations ...passed 00:06:11.286 00:06:11.286 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.286 suites 1 1 n/a 0 0 00:06:11.286 tests 4 4 4 0 0 00:06:11.286 asserts 152 152 152 0 n/a 00:06:11.286 00:06:11.286 Elapsed time = 0.220 seconds 00:06:11.286 00:06:11.286 real 0m0.237s 00:06:11.286 user 0m0.217s 00:06:11.286 sys 0m0.016s 00:06:11.286 08:39:42 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.286 08:39:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:11.286 ************************************ 00:06:11.286 END TEST env_memory 00:06:11.286 ************************************ 00:06:11.287 08:39:42 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:11.287 08:39:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.287 08:39:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.287 08:39:42 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.287 ************************************ 00:06:11.287 START TEST env_vtophys 00:06:11.287 ************************************ 00:06:11.287 08:39:42 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:11.287 EAL: lib.eal log level changed from notice to debug 00:06:11.287 EAL: Detected lcore 0 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 1 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 2 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 3 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 4 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 5 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 6 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 7 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 8 as core 0 on socket 0 00:06:11.287 EAL: Detected lcore 9 as core 0 on socket 0 00:06:11.287 EAL: Maximum logical cores by configuration: 128 00:06:11.287 EAL: Detected CPU lcores: 10 00:06:11.287 EAL: Detected NUMA nodes: 1 00:06:11.287 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:11.287 EAL: Detected shared linkage of DPDK 00:06:11.287 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:11.287 EAL: Selected IOVA mode 'PA' 00:06:11.287 EAL: Probing VFIO support... 00:06:11.287 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:11.287 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:11.287 EAL: Ask a virtual area of 0x2e000 bytes 00:06:11.287 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:11.287 EAL: Setting up physically contiguous memory... 00:06:11.287 EAL: Setting maximum number of open files to 524288 00:06:11.287 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:11.287 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:11.287 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.287 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:11.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.287 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.287 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:11.287 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:11.287 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.287 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:11.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.287 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.287 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:11.287 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:11.287 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.287 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:11.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.287 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.287 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:11.287 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:11.287 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.287 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:11.287 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.287 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.287 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:11.287 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:11.287 EAL: Hugepages will be freed exactly as allocated. 00:06:11.287 EAL: No shared files mode enabled, IPC is disabled 00:06:11.287 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: TSC frequency is ~2200000 KHz 00:06:11.546 EAL: Main lcore 0 is ready (tid=7fd03c886a00;cpuset=[0]) 00:06:11.546 EAL: Trying to obtain current memory policy. 00:06:11.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.546 EAL: Restoring previous memory policy: 0 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was expanded by 2MB 00:06:11.546 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:11.546 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:11.546 EAL: Mem event callback 'spdk:(nil)' registered 00:06:11.546 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:11.546 00:06:11.546 00:06:11.546 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.546 http://cunit.sourceforge.net/ 00:06:11.546 00:06:11.546 00:06:11.546 Suite: components_suite 00:06:11.546 Test: vtophys_malloc_test ...passed 00:06:11.546 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:11.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.546 EAL: Restoring previous memory policy: 4 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was expanded by 4MB 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was shrunk by 4MB 00:06:11.546 EAL: Trying to obtain current memory policy. 00:06:11.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.546 EAL: Restoring previous memory policy: 4 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was expanded by 6MB 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was shrunk by 6MB 00:06:11.546 EAL: Trying to obtain current memory policy. 00:06:11.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.546 EAL: Restoring previous memory policy: 4 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was expanded by 10MB 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was shrunk by 10MB 00:06:11.546 EAL: Trying to obtain current memory policy. 00:06:11.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.546 EAL: Restoring previous memory policy: 4 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was expanded by 18MB 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was shrunk by 18MB 00:06:11.546 EAL: Trying to obtain current memory policy. 00:06:11.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.546 EAL: Restoring previous memory policy: 4 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was expanded by 34MB 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was shrunk by 34MB 00:06:11.546 EAL: Trying to obtain current memory policy. 
00:06:11.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.546 EAL: Restoring previous memory policy: 4 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was expanded by 66MB 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was shrunk by 66MB 00:06:11.546 EAL: Trying to obtain current memory policy. 00:06:11.546 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.546 EAL: Restoring previous memory policy: 4 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.546 EAL: request: mp_malloc_sync 00:06:11.546 EAL: No shared files mode enabled, IPC is disabled 00:06:11.546 EAL: Heap on socket 0 was expanded by 130MB 00:06:11.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.805 EAL: request: mp_malloc_sync 00:06:11.805 EAL: No shared files mode enabled, IPC is disabled 00:06:11.805 EAL: Heap on socket 0 was shrunk by 130MB 00:06:11.805 EAL: Trying to obtain current memory policy. 00:06:11.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.805 EAL: Restoring previous memory policy: 4 00:06:11.805 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.805 EAL: request: mp_malloc_sync 00:06:11.805 EAL: No shared files mode enabled, IPC is disabled 00:06:11.805 EAL: Heap on socket 0 was expanded by 258MB 00:06:11.805 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.805 EAL: request: mp_malloc_sync 00:06:11.805 EAL: No shared files mode enabled, IPC is disabled 00:06:11.805 EAL: Heap on socket 0 was shrunk by 258MB 00:06:11.805 EAL: Trying to obtain current memory policy. 00:06:11.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.064 EAL: Restoring previous memory policy: 4 00:06:12.064 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.064 EAL: request: mp_malloc_sync 00:06:12.064 EAL: No shared files mode enabled, IPC is disabled 00:06:12.064 EAL: Heap on socket 0 was expanded by 514MB 00:06:12.064 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.322 EAL: request: mp_malloc_sync 00:06:12.322 EAL: No shared files mode enabled, IPC is disabled 00:06:12.322 EAL: Heap on socket 0 was shrunk by 514MB 00:06:12.322 EAL: Trying to obtain current memory policy. 
00:06:12.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.581 EAL: Restoring previous memory policy: 4 00:06:12.581 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.581 EAL: request: mp_malloc_sync 00:06:12.581 EAL: No shared files mode enabled, IPC is disabled 00:06:12.581 EAL: Heap on socket 0 was expanded by 1026MB 00:06:12.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.142 EAL: request: mp_malloc_sync 00:06:13.142 EAL: No shared files mode enabled, IPC is disabled 00:06:13.142 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:13.142 passed 00:06:13.142 00:06:13.142 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.142 suites 1 1 n/a 0 0 00:06:13.142 tests 2 2 2 0 0 00:06:13.142 asserts 5470 5470 5470 0 n/a 00:06:13.142 00:06:13.142 Elapsed time = 1.480 seconds 00:06:13.142 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.142 EAL: request: mp_malloc_sync 00:06:13.142 EAL: No shared files mode enabled, IPC is disabled 00:06:13.142 EAL: Heap on socket 0 was shrunk by 2MB 00:06:13.142 EAL: No shared files mode enabled, IPC is disabled 00:06:13.142 EAL: No shared files mode enabled, IPC is disabled 00:06:13.142 EAL: No shared files mode enabled, IPC is disabled 00:06:13.142 00:06:13.142 real 0m1.708s 00:06:13.142 user 0m0.977s 00:06:13.142 sys 0m0.591s 00:06:13.142 08:39:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.142 08:39:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:13.142 ************************************ 00:06:13.142 END TEST env_vtophys 00:06:13.142 ************************************ 00:06:13.142 08:39:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:13.142 08:39:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.142 08:39:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.142 08:39:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.142 ************************************ 00:06:13.142 START TEST env_pci 00:06:13.142 ************************************ 00:06:13.142 08:39:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:13.142 00:06:13.142 00:06:13.142 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.142 http://cunit.sourceforge.net/ 00:06:13.142 00:06:13.142 00:06:13.142 Suite: pci 00:06:13.142 Test: pci_hook ...[2024-11-20 08:39:43.833295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56670 has claimed it 00:06:13.142 passed 00:06:13.142 00:06:13.142 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.142 suites 1 1 n/a 0 0 00:06:13.142 tests 1 1 1 0 0 00:06:13.142 asserts 25 25 25 0 n/a 00:06:13.142 00:06:13.142 Elapsed time = 0.002 seconds 00:06:13.142 EAL: Cannot find device (10000:00:01.0) 00:06:13.142 EAL: Failed to attach device on primary process 00:06:13.142 00:06:13.142 real 0m0.020s 00:06:13.142 user 0m0.007s 00:06:13.142 sys 0m0.012s 00:06:13.142 08:39:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.142 ************************************ 00:06:13.142 END TEST env_pci 00:06:13.142 ************************************ 00:06:13.142 08:39:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:13.142 08:39:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:13.142 08:39:43 env -- env/env.sh@15 -- # uname 00:06:13.142 08:39:43 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:13.142 08:39:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:13.142 08:39:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:13.142 08:39:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:13.142 08:39:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.142 08:39:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.142 ************************************ 00:06:13.142 START TEST env_dpdk_post_init 00:06:13.142 ************************************ 00:06:13.142 08:39:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:13.142 EAL: Detected CPU lcores: 10 00:06:13.142 EAL: Detected NUMA nodes: 1 00:06:13.142 EAL: Detected shared linkage of DPDK 00:06:13.142 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:13.142 EAL: Selected IOVA mode 'PA' 00:06:13.401 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:13.401 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:13.401 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:13.401 Starting DPDK initialization... 00:06:13.401 Starting SPDK post initialization... 00:06:13.401 SPDK NVMe probe 00:06:13.401 Attaching to 0000:00:10.0 00:06:13.401 Attaching to 0000:00:11.0 00:06:13.401 Attached to 0000:00:10.0 00:06:13.401 Attached to 0000:00:11.0 00:06:13.401 Cleaning up... 00:06:13.401 00:06:13.401 real 0m0.203s 00:06:13.401 user 0m0.057s 00:06:13.401 sys 0m0.046s 00:06:13.401 08:39:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.401 ************************************ 00:06:13.401 END TEST env_dpdk_post_init 00:06:13.401 ************************************ 00:06:13.401 08:39:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.401 08:39:44 env -- env/env.sh@26 -- # uname 00:06:13.401 08:39:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:13.401 08:39:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:13.401 08:39:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.401 08:39:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.401 08:39:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.402 ************************************ 00:06:13.402 START TEST env_mem_callbacks 00:06:13.402 ************************************ 00:06:13.402 08:39:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:13.402 EAL: Detected CPU lcores: 10 00:06:13.402 EAL: Detected NUMA nodes: 1 00:06:13.402 EAL: Detected shared linkage of DPDK 00:06:13.402 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:13.402 EAL: Selected IOVA mode 'PA' 00:06:13.402 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:13.402 00:06:13.402 00:06:13.402 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.402 http://cunit.sourceforge.net/ 00:06:13.402 00:06:13.402 00:06:13.402 Suite: memory 00:06:13.402 Test: test ... 
00:06:13.402 register 0x200000200000 2097152 00:06:13.402 malloc 3145728 00:06:13.402 register 0x200000400000 4194304 00:06:13.402 buf 0x200000500000 len 3145728 PASSED 00:06:13.402 malloc 64 00:06:13.402 buf 0x2000004fff40 len 64 PASSED 00:06:13.402 malloc 4194304 00:06:13.402 register 0x200000800000 6291456 00:06:13.402 buf 0x200000a00000 len 4194304 PASSED 00:06:13.402 free 0x200000500000 3145728 00:06:13.402 free 0x2000004fff40 64 00:06:13.402 unregister 0x200000400000 4194304 PASSED 00:06:13.402 free 0x200000a00000 4194304 00:06:13.402 unregister 0x200000800000 6291456 PASSED 00:06:13.402 malloc 8388608 00:06:13.402 register 0x200000400000 10485760 00:06:13.402 buf 0x200000600000 len 8388608 PASSED 00:06:13.402 free 0x200000600000 8388608 00:06:13.402 unregister 0x200000400000 10485760 PASSED 00:06:13.402 passed 00:06:13.402 00:06:13.402 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.402 suites 1 1 n/a 0 0 00:06:13.402 tests 1 1 1 0 0 00:06:13.402 asserts 15 15 15 0 n/a 00:06:13.402 00:06:13.402 Elapsed time = 0.009 seconds 00:06:13.661 00:06:13.661 real 0m0.146s 00:06:13.661 user 0m0.021s 00:06:13.661 sys 0m0.025s 00:06:13.661 08:39:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.661 08:39:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:13.661 ************************************ 00:06:13.661 END TEST env_mem_callbacks 00:06:13.661 ************************************ 00:06:13.661 00:06:13.661 real 0m2.806s 00:06:13.661 user 0m1.487s 00:06:13.661 sys 0m0.967s 00:06:13.661 08:39:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.661 08:39:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.661 ************************************ 00:06:13.661 END TEST env 00:06:13.661 ************************************ 00:06:13.661 08:39:44 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:13.661 08:39:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.661 08:39:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.661 08:39:44 -- common/autotest_common.sh@10 -- # set +x 00:06:13.661 ************************************ 00:06:13.661 START TEST rpc 00:06:13.661 ************************************ 00:06:13.661 08:39:44 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:13.661 * Looking for test storage... 
00:06:13.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:13.661 08:39:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.661 08:39:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.661 08:39:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.920 08:39:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.920 08:39:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.920 08:39:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.920 08:39:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.920 08:39:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.920 08:39:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.920 08:39:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.920 08:39:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.920 08:39:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.920 08:39:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.920 08:39:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.920 08:39:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.920 08:39:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:13.920 08:39:44 rpc -- scripts/common.sh@345 -- # : 1 00:06:13.920 08:39:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.920 08:39:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.920 08:39:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:13.920 08:39:44 rpc -- scripts/common.sh@353 -- # local d=1 00:06:13.920 08:39:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.920 08:39:44 rpc -- scripts/common.sh@355 -- # echo 1 00:06:13.920 08:39:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.920 08:39:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:13.920 08:39:44 rpc -- scripts/common.sh@353 -- # local d=2 00:06:13.920 08:39:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.920 08:39:44 rpc -- scripts/common.sh@355 -- # echo 2 00:06:13.920 08:39:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.920 08:39:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.921 08:39:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.921 08:39:44 rpc -- scripts/common.sh@368 -- # return 0 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.921 --rc genhtml_branch_coverage=1 00:06:13.921 --rc genhtml_function_coverage=1 00:06:13.921 --rc genhtml_legend=1 00:06:13.921 --rc geninfo_all_blocks=1 00:06:13.921 --rc geninfo_unexecuted_blocks=1 00:06:13.921 00:06:13.921 ' 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.921 --rc genhtml_branch_coverage=1 00:06:13.921 --rc genhtml_function_coverage=1 00:06:13.921 --rc genhtml_legend=1 00:06:13.921 --rc geninfo_all_blocks=1 00:06:13.921 --rc geninfo_unexecuted_blocks=1 00:06:13.921 00:06:13.921 ' 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.921 --rc genhtml_branch_coverage=1 00:06:13.921 --rc genhtml_function_coverage=1 00:06:13.921 --rc 
genhtml_legend=1 00:06:13.921 --rc geninfo_all_blocks=1 00:06:13.921 --rc geninfo_unexecuted_blocks=1 00:06:13.921 00:06:13.921 ' 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.921 --rc genhtml_branch_coverage=1 00:06:13.921 --rc genhtml_function_coverage=1 00:06:13.921 --rc genhtml_legend=1 00:06:13.921 --rc geninfo_all_blocks=1 00:06:13.921 --rc geninfo_unexecuted_blocks=1 00:06:13.921 00:06:13.921 ' 00:06:13.921 08:39:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56788 00:06:13.921 08:39:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.921 08:39:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:13.921 08:39:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56788 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 56788 ']' 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.921 08:39:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.921 [2024-11-20 08:39:44.685960] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:13.921 [2024-11-20 08:39:44.686068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56788 ] 00:06:14.180 [2024-11-20 08:39:44.840245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.180 [2024-11-20 08:39:44.944732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:14.180 [2024-11-20 08:39:44.944853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56788' to capture a snapshot of events at runtime. 00:06:14.180 [2024-11-20 08:39:44.944879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.180 [2024-11-20 08:39:44.944897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.180 [2024-11-20 08:39:44.944913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56788 for offline analysis/debug. 
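For reference, the rpc.sh setup above comes down to launching a bare spdk_tgt with the bdev tracepoint group enabled and waiting for its default RPC socket (/var/tmp/spdk.sock) before issuing any commands; a rough by-hand equivalent, with a simple polling loop standing in for the test framework's waitforlisten helper (the loop is an illustration, not that helper), would be:

    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # poll the default socket until the target answers RPCs
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done

The rpc_integrity and rpc_daemon_integrity cases that follow then exercise this target with the same four-step pattern visible in their traces: create a malloc bdev, layer a passthru bdev on it, confirm bdev_get_bdevs now reports both, and tear them down again; sketched by hand:

    malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)      # 8 MiB backing store, 512-byte blocks
    ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length              # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$malloc"

The PID baked into the trace shm path (spdk_tgt_trace.pid56788 above) will of course differ on any other run.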
00:06:14.180 [2024-11-20 08:39:44.945532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.180 [2024-11-20 08:39:45.033343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.116 08:39:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.116 08:39:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.116 08:39:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:15.116 08:39:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:15.116 08:39:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:15.117 08:39:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:15.117 08:39:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.117 08:39:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.117 08:39:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 ************************************ 00:06:15.117 START TEST rpc_integrity 00:06:15.117 ************************************ 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:15.117 { 00:06:15.117 "name": "Malloc0", 00:06:15.117 "aliases": [ 00:06:15.117 "0aa7f6d5-8c56-472e-b879-e788880e0b67" 00:06:15.117 ], 00:06:15.117 "product_name": "Malloc disk", 00:06:15.117 "block_size": 512, 00:06:15.117 "num_blocks": 16384, 00:06:15.117 "uuid": "0aa7f6d5-8c56-472e-b879-e788880e0b67", 00:06:15.117 "assigned_rate_limits": { 00:06:15.117 "rw_ios_per_sec": 0, 00:06:15.117 "rw_mbytes_per_sec": 0, 00:06:15.117 "r_mbytes_per_sec": 0, 00:06:15.117 "w_mbytes_per_sec": 0 00:06:15.117 }, 00:06:15.117 "claimed": false, 00:06:15.117 "zoned": false, 00:06:15.117 
"supported_io_types": { 00:06:15.117 "read": true, 00:06:15.117 "write": true, 00:06:15.117 "unmap": true, 00:06:15.117 "flush": true, 00:06:15.117 "reset": true, 00:06:15.117 "nvme_admin": false, 00:06:15.117 "nvme_io": false, 00:06:15.117 "nvme_io_md": false, 00:06:15.117 "write_zeroes": true, 00:06:15.117 "zcopy": true, 00:06:15.117 "get_zone_info": false, 00:06:15.117 "zone_management": false, 00:06:15.117 "zone_append": false, 00:06:15.117 "compare": false, 00:06:15.117 "compare_and_write": false, 00:06:15.117 "abort": true, 00:06:15.117 "seek_hole": false, 00:06:15.117 "seek_data": false, 00:06:15.117 "copy": true, 00:06:15.117 "nvme_iov_md": false 00:06:15.117 }, 00:06:15.117 "memory_domains": [ 00:06:15.117 { 00:06:15.117 "dma_device_id": "system", 00:06:15.117 "dma_device_type": 1 00:06:15.117 }, 00:06:15.117 { 00:06:15.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.117 "dma_device_type": 2 00:06:15.117 } 00:06:15.117 ], 00:06:15.117 "driver_specific": {} 00:06:15.117 } 00:06:15.117 ]' 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 [2024-11-20 08:39:45.932224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:15.117 [2024-11-20 08:39:45.932294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.117 [2024-11-20 08:39:45.932324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x128bf20 00:06:15.117 [2024-11-20 08:39:45.932336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.117 [2024-11-20 08:39:45.934214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.117 [2024-11-20 08:39:45.934258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:15.117 Passthru0 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.117 08:39:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.117 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:15.117 { 00:06:15.117 "name": "Malloc0", 00:06:15.117 "aliases": [ 00:06:15.117 "0aa7f6d5-8c56-472e-b879-e788880e0b67" 00:06:15.117 ], 00:06:15.117 "product_name": "Malloc disk", 00:06:15.117 "block_size": 512, 00:06:15.117 "num_blocks": 16384, 00:06:15.117 "uuid": "0aa7f6d5-8c56-472e-b879-e788880e0b67", 00:06:15.117 "assigned_rate_limits": { 00:06:15.117 "rw_ios_per_sec": 0, 00:06:15.117 "rw_mbytes_per_sec": 0, 00:06:15.117 "r_mbytes_per_sec": 0, 00:06:15.117 "w_mbytes_per_sec": 0 00:06:15.117 }, 00:06:15.117 "claimed": true, 00:06:15.117 "claim_type": "exclusive_write", 00:06:15.117 "zoned": false, 00:06:15.117 "supported_io_types": { 00:06:15.117 "read": true, 00:06:15.117 "write": true, 00:06:15.117 "unmap": true, 00:06:15.117 "flush": true, 00:06:15.117 "reset": true, 00:06:15.117 "nvme_admin": false, 
00:06:15.117 "nvme_io": false, 00:06:15.117 "nvme_io_md": false, 00:06:15.117 "write_zeroes": true, 00:06:15.117 "zcopy": true, 00:06:15.117 "get_zone_info": false, 00:06:15.117 "zone_management": false, 00:06:15.117 "zone_append": false, 00:06:15.117 "compare": false, 00:06:15.117 "compare_and_write": false, 00:06:15.117 "abort": true, 00:06:15.117 "seek_hole": false, 00:06:15.117 "seek_data": false, 00:06:15.117 "copy": true, 00:06:15.117 "nvme_iov_md": false 00:06:15.117 }, 00:06:15.117 "memory_domains": [ 00:06:15.117 { 00:06:15.117 "dma_device_id": "system", 00:06:15.117 "dma_device_type": 1 00:06:15.117 }, 00:06:15.117 { 00:06:15.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.117 "dma_device_type": 2 00:06:15.117 } 00:06:15.117 ], 00:06:15.117 "driver_specific": {} 00:06:15.117 }, 00:06:15.117 { 00:06:15.117 "name": "Passthru0", 00:06:15.117 "aliases": [ 00:06:15.117 "4c3d4985-b9c2-59ba-854f-13404dcca26e" 00:06:15.117 ], 00:06:15.117 "product_name": "passthru", 00:06:15.117 "block_size": 512, 00:06:15.117 "num_blocks": 16384, 00:06:15.117 "uuid": "4c3d4985-b9c2-59ba-854f-13404dcca26e", 00:06:15.117 "assigned_rate_limits": { 00:06:15.117 "rw_ios_per_sec": 0, 00:06:15.117 "rw_mbytes_per_sec": 0, 00:06:15.117 "r_mbytes_per_sec": 0, 00:06:15.117 "w_mbytes_per_sec": 0 00:06:15.117 }, 00:06:15.117 "claimed": false, 00:06:15.117 "zoned": false, 00:06:15.117 "supported_io_types": { 00:06:15.117 "read": true, 00:06:15.117 "write": true, 00:06:15.117 "unmap": true, 00:06:15.117 "flush": true, 00:06:15.117 "reset": true, 00:06:15.117 "nvme_admin": false, 00:06:15.117 "nvme_io": false, 00:06:15.117 "nvme_io_md": false, 00:06:15.117 "write_zeroes": true, 00:06:15.117 "zcopy": true, 00:06:15.117 "get_zone_info": false, 00:06:15.117 "zone_management": false, 00:06:15.117 "zone_append": false, 00:06:15.117 "compare": false, 00:06:15.117 "compare_and_write": false, 00:06:15.117 "abort": true, 00:06:15.117 "seek_hole": false, 00:06:15.118 "seek_data": false, 00:06:15.118 "copy": true, 00:06:15.118 "nvme_iov_md": false 00:06:15.118 }, 00:06:15.118 "memory_domains": [ 00:06:15.118 { 00:06:15.118 "dma_device_id": "system", 00:06:15.118 "dma_device_type": 1 00:06:15.118 }, 00:06:15.118 { 00:06:15.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.118 "dma_device_type": 2 00:06:15.118 } 00:06:15.118 ], 00:06:15.118 "driver_specific": { 00:06:15.118 "passthru": { 00:06:15.118 "name": "Passthru0", 00:06:15.118 "base_bdev_name": "Malloc0" 00:06:15.118 } 00:06:15.118 } 00:06:15.118 } 00:06:15.118 ]' 00:06:15.118 08:39:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:15.118 08:39:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:15.118 08:39:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:15.118 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.118 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.376 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.376 08:39:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:15.376 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.376 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.376 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.376 08:39:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:15.376 08:39:46 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.376 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.376 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.376 08:39:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:15.376 08:39:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:15.376 08:39:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:15.376 00:06:15.376 real 0m0.328s 00:06:15.376 user 0m0.211s 00:06:15.376 sys 0m0.046s 00:06:15.376 ************************************ 00:06:15.376 END TEST rpc_integrity 00:06:15.376 ************************************ 00:06:15.376 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.376 08:39:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.376 08:39:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:15.376 08:39:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.376 08:39:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.376 08:39:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.377 ************************************ 00:06:15.377 START TEST rpc_plugins 00:06:15.377 ************************************ 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:15.377 { 00:06:15.377 "name": "Malloc1", 00:06:15.377 "aliases": [ 00:06:15.377 "5f4d1cde-49bc-4eeb-ade4-a18c2e97b61e" 00:06:15.377 ], 00:06:15.377 "product_name": "Malloc disk", 00:06:15.377 "block_size": 4096, 00:06:15.377 "num_blocks": 256, 00:06:15.377 "uuid": "5f4d1cde-49bc-4eeb-ade4-a18c2e97b61e", 00:06:15.377 "assigned_rate_limits": { 00:06:15.377 "rw_ios_per_sec": 0, 00:06:15.377 "rw_mbytes_per_sec": 0, 00:06:15.377 "r_mbytes_per_sec": 0, 00:06:15.377 "w_mbytes_per_sec": 0 00:06:15.377 }, 00:06:15.377 "claimed": false, 00:06:15.377 "zoned": false, 00:06:15.377 "supported_io_types": { 00:06:15.377 "read": true, 00:06:15.377 "write": true, 00:06:15.377 "unmap": true, 00:06:15.377 "flush": true, 00:06:15.377 "reset": true, 00:06:15.377 "nvme_admin": false, 00:06:15.377 "nvme_io": false, 00:06:15.377 "nvme_io_md": false, 00:06:15.377 "write_zeroes": true, 00:06:15.377 "zcopy": true, 00:06:15.377 "get_zone_info": false, 00:06:15.377 "zone_management": false, 00:06:15.377 "zone_append": false, 00:06:15.377 "compare": false, 00:06:15.377 "compare_and_write": false, 00:06:15.377 "abort": true, 00:06:15.377 "seek_hole": false, 00:06:15.377 "seek_data": false, 00:06:15.377 "copy": true, 00:06:15.377 "nvme_iov_md": false 00:06:15.377 }, 00:06:15.377 "memory_domains": [ 00:06:15.377 { 
00:06:15.377 "dma_device_id": "system", 00:06:15.377 "dma_device_type": 1 00:06:15.377 }, 00:06:15.377 { 00:06:15.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.377 "dma_device_type": 2 00:06:15.377 } 00:06:15.377 ], 00:06:15.377 "driver_specific": {} 00:06:15.377 } 00:06:15.377 ]' 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.377 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:15.377 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:15.636 08:39:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:15.636 00:06:15.636 real 0m0.172s 00:06:15.636 user 0m0.115s 00:06:15.636 sys 0m0.018s 00:06:15.636 ************************************ 00:06:15.636 END TEST rpc_plugins 00:06:15.636 ************************************ 00:06:15.636 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.636 08:39:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.636 08:39:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:15.636 08:39:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.636 08:39:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.636 08:39:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.636 ************************************ 00:06:15.636 START TEST rpc_trace_cmd_test 00:06:15.636 ************************************ 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:15.636 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56788", 00:06:15.636 "tpoint_group_mask": "0x8", 00:06:15.636 "iscsi_conn": { 00:06:15.636 "mask": "0x2", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "scsi": { 00:06:15.636 "mask": "0x4", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "bdev": { 00:06:15.636 "mask": "0x8", 00:06:15.636 "tpoint_mask": "0xffffffffffffffff" 00:06:15.636 }, 00:06:15.636 "nvmf_rdma": { 00:06:15.636 "mask": "0x10", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "nvmf_tcp": { 00:06:15.636 "mask": "0x20", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "ftl": { 00:06:15.636 
"mask": "0x40", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "blobfs": { 00:06:15.636 "mask": "0x80", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "dsa": { 00:06:15.636 "mask": "0x200", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "thread": { 00:06:15.636 "mask": "0x400", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "nvme_pcie": { 00:06:15.636 "mask": "0x800", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "iaa": { 00:06:15.636 "mask": "0x1000", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "nvme_tcp": { 00:06:15.636 "mask": "0x2000", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "bdev_nvme": { 00:06:15.636 "mask": "0x4000", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "sock": { 00:06:15.636 "mask": "0x8000", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "blob": { 00:06:15.636 "mask": "0x10000", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "bdev_raid": { 00:06:15.636 "mask": "0x20000", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 }, 00:06:15.636 "scheduler": { 00:06:15.636 "mask": "0x40000", 00:06:15.636 "tpoint_mask": "0x0" 00:06:15.636 } 00:06:15.636 }' 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:15.636 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:15.895 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:15.895 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:15.895 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:15.895 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:15.895 08:39:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:15.895 00:06:15.895 real 0m0.280s 00:06:15.895 user 0m0.236s 00:06:15.895 sys 0m0.035s 00:06:15.895 08:39:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.895 08:39:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.895 ************************************ 00:06:15.895 END TEST rpc_trace_cmd_test 00:06:15.895 ************************************ 00:06:15.895 08:39:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:15.895 08:39:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:15.895 08:39:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:15.895 08:39:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.895 08:39:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.895 08:39:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.895 ************************************ 00:06:15.895 START TEST rpc_daemon_integrity 00:06:15.895 ************************************ 00:06:15.895 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:15.895 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:15.895 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.895 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.895 
08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.896 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:16.155 { 00:06:16.155 "name": "Malloc2", 00:06:16.155 "aliases": [ 00:06:16.155 "c65a0ad2-b81d-4933-9916-ec5be2681e45" 00:06:16.155 ], 00:06:16.155 "product_name": "Malloc disk", 00:06:16.155 "block_size": 512, 00:06:16.155 "num_blocks": 16384, 00:06:16.155 "uuid": "c65a0ad2-b81d-4933-9916-ec5be2681e45", 00:06:16.155 "assigned_rate_limits": { 00:06:16.155 "rw_ios_per_sec": 0, 00:06:16.155 "rw_mbytes_per_sec": 0, 00:06:16.155 "r_mbytes_per_sec": 0, 00:06:16.155 "w_mbytes_per_sec": 0 00:06:16.155 }, 00:06:16.155 "claimed": false, 00:06:16.155 "zoned": false, 00:06:16.155 "supported_io_types": { 00:06:16.155 "read": true, 00:06:16.155 "write": true, 00:06:16.155 "unmap": true, 00:06:16.155 "flush": true, 00:06:16.155 "reset": true, 00:06:16.155 "nvme_admin": false, 00:06:16.155 "nvme_io": false, 00:06:16.155 "nvme_io_md": false, 00:06:16.155 "write_zeroes": true, 00:06:16.155 "zcopy": true, 00:06:16.155 "get_zone_info": false, 00:06:16.155 "zone_management": false, 00:06:16.155 "zone_append": false, 00:06:16.155 "compare": false, 00:06:16.155 "compare_and_write": false, 00:06:16.155 "abort": true, 00:06:16.155 "seek_hole": false, 00:06:16.155 "seek_data": false, 00:06:16.155 "copy": true, 00:06:16.155 "nvme_iov_md": false 00:06:16.155 }, 00:06:16.155 "memory_domains": [ 00:06:16.155 { 00:06:16.155 "dma_device_id": "system", 00:06:16.155 "dma_device_type": 1 00:06:16.155 }, 00:06:16.155 { 00:06:16.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.155 "dma_device_type": 2 00:06:16.155 } 00:06:16.155 ], 00:06:16.155 "driver_specific": {} 00:06:16.155 } 00:06:16.155 ]' 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.155 [2024-11-20 08:39:46.867186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:16.155 [2024-11-20 08:39:46.867260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:16.155 [2024-11-20 08:39:46.867282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x137f790 00:06:16.155 [2024-11-20 08:39:46.867294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.155 [2024-11-20 08:39:46.869326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.155 [2024-11-20 08:39:46.869363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:16.155 Passthru0 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.155 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:16.155 { 00:06:16.155 "name": "Malloc2", 00:06:16.155 "aliases": [ 00:06:16.155 "c65a0ad2-b81d-4933-9916-ec5be2681e45" 00:06:16.155 ], 00:06:16.155 "product_name": "Malloc disk", 00:06:16.155 "block_size": 512, 00:06:16.155 "num_blocks": 16384, 00:06:16.155 "uuid": "c65a0ad2-b81d-4933-9916-ec5be2681e45", 00:06:16.155 "assigned_rate_limits": { 00:06:16.155 "rw_ios_per_sec": 0, 00:06:16.155 "rw_mbytes_per_sec": 0, 00:06:16.155 "r_mbytes_per_sec": 0, 00:06:16.155 "w_mbytes_per_sec": 0 00:06:16.155 }, 00:06:16.155 "claimed": true, 00:06:16.155 "claim_type": "exclusive_write", 00:06:16.155 "zoned": false, 00:06:16.155 "supported_io_types": { 00:06:16.155 "read": true, 00:06:16.155 "write": true, 00:06:16.155 "unmap": true, 00:06:16.155 "flush": true, 00:06:16.155 "reset": true, 00:06:16.155 "nvme_admin": false, 00:06:16.155 "nvme_io": false, 00:06:16.155 "nvme_io_md": false, 00:06:16.155 "write_zeroes": true, 00:06:16.155 "zcopy": true, 00:06:16.155 "get_zone_info": false, 00:06:16.155 "zone_management": false, 00:06:16.155 "zone_append": false, 00:06:16.155 "compare": false, 00:06:16.155 "compare_and_write": false, 00:06:16.155 "abort": true, 00:06:16.155 "seek_hole": false, 00:06:16.155 "seek_data": false, 00:06:16.155 "copy": true, 00:06:16.155 "nvme_iov_md": false 00:06:16.155 }, 00:06:16.155 "memory_domains": [ 00:06:16.155 { 00:06:16.155 "dma_device_id": "system", 00:06:16.155 "dma_device_type": 1 00:06:16.155 }, 00:06:16.155 { 00:06:16.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.155 "dma_device_type": 2 00:06:16.155 } 00:06:16.155 ], 00:06:16.155 "driver_specific": {} 00:06:16.155 }, 00:06:16.155 { 00:06:16.155 "name": "Passthru0", 00:06:16.155 "aliases": [ 00:06:16.155 "223d9d2d-1a01-52f3-a67f-4a61c26b4e3f" 00:06:16.155 ], 00:06:16.155 "product_name": "passthru", 00:06:16.155 "block_size": 512, 00:06:16.155 "num_blocks": 16384, 00:06:16.155 "uuid": "223d9d2d-1a01-52f3-a67f-4a61c26b4e3f", 00:06:16.155 "assigned_rate_limits": { 00:06:16.155 "rw_ios_per_sec": 0, 00:06:16.155 "rw_mbytes_per_sec": 0, 00:06:16.155 "r_mbytes_per_sec": 0, 00:06:16.155 "w_mbytes_per_sec": 0 00:06:16.155 }, 00:06:16.155 "claimed": false, 00:06:16.155 "zoned": false, 00:06:16.155 "supported_io_types": { 00:06:16.155 "read": true, 00:06:16.155 "write": true, 00:06:16.155 "unmap": true, 00:06:16.156 "flush": true, 00:06:16.156 "reset": true, 00:06:16.156 "nvme_admin": false, 00:06:16.156 "nvme_io": false, 00:06:16.156 
"nvme_io_md": false, 00:06:16.156 "write_zeroes": true, 00:06:16.156 "zcopy": true, 00:06:16.156 "get_zone_info": false, 00:06:16.156 "zone_management": false, 00:06:16.156 "zone_append": false, 00:06:16.156 "compare": false, 00:06:16.156 "compare_and_write": false, 00:06:16.156 "abort": true, 00:06:16.156 "seek_hole": false, 00:06:16.156 "seek_data": false, 00:06:16.156 "copy": true, 00:06:16.156 "nvme_iov_md": false 00:06:16.156 }, 00:06:16.156 "memory_domains": [ 00:06:16.156 { 00:06:16.156 "dma_device_id": "system", 00:06:16.156 "dma_device_type": 1 00:06:16.156 }, 00:06:16.156 { 00:06:16.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.156 "dma_device_type": 2 00:06:16.156 } 00:06:16.156 ], 00:06:16.156 "driver_specific": { 00:06:16.156 "passthru": { 00:06:16.156 "name": "Passthru0", 00:06:16.156 "base_bdev_name": "Malloc2" 00:06:16.156 } 00:06:16.156 } 00:06:16.156 } 00:06:16.156 ]' 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:16.156 08:39:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:16.156 08:39:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:16.156 00:06:16.156 real 0m0.320s 00:06:16.156 user 0m0.218s 00:06:16.156 sys 0m0.037s 00:06:16.156 ************************************ 00:06:16.156 END TEST rpc_daemon_integrity 00:06:16.156 08:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.156 08:39:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.156 ************************************ 00:06:16.414 08:39:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:16.414 08:39:47 rpc -- rpc/rpc.sh@84 -- # killprocess 56788 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 56788 ']' 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@958 -- # kill -0 56788 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@959 -- # uname 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56788 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:06:16.414 killing process with pid 56788 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56788' 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@973 -- # kill 56788 00:06:16.414 08:39:47 rpc -- common/autotest_common.sh@978 -- # wait 56788 00:06:16.982 00:06:16.982 real 0m3.296s 00:06:16.982 user 0m4.174s 00:06:16.982 sys 0m0.797s 00:06:16.982 08:39:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.982 ************************************ 00:06:16.982 END TEST rpc 00:06:16.982 ************************************ 00:06:16.982 08:39:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.982 08:39:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:16.982 08:39:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.982 08:39:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.982 08:39:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.982 ************************************ 00:06:16.982 START TEST skip_rpc 00:06:16.982 ************************************ 00:06:16.982 08:39:47 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:16.982 * Looking for test storage... 00:06:16.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:16.982 08:39:47 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.982 08:39:47 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.982 08:39:47 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.241 08:39:47 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.241 08:39:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:17.241 08:39:47 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.241 08:39:47 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.241 --rc genhtml_branch_coverage=1 00:06:17.241 --rc genhtml_function_coverage=1 00:06:17.241 --rc genhtml_legend=1 00:06:17.241 --rc geninfo_all_blocks=1 00:06:17.241 --rc geninfo_unexecuted_blocks=1 00:06:17.241 00:06:17.241 ' 00:06:17.241 08:39:47 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.241 --rc genhtml_branch_coverage=1 00:06:17.241 --rc genhtml_function_coverage=1 00:06:17.241 --rc genhtml_legend=1 00:06:17.241 --rc geninfo_all_blocks=1 00:06:17.241 --rc geninfo_unexecuted_blocks=1 00:06:17.241 00:06:17.241 ' 00:06:17.241 08:39:47 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.241 --rc genhtml_branch_coverage=1 00:06:17.241 --rc genhtml_function_coverage=1 00:06:17.241 --rc genhtml_legend=1 00:06:17.241 --rc geninfo_all_blocks=1 00:06:17.241 --rc geninfo_unexecuted_blocks=1 00:06:17.241 00:06:17.241 ' 00:06:17.241 08:39:47 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.241 --rc genhtml_branch_coverage=1 00:06:17.241 --rc genhtml_function_coverage=1 00:06:17.241 --rc genhtml_legend=1 00:06:17.241 --rc geninfo_all_blocks=1 00:06:17.241 --rc geninfo_unexecuted_blocks=1 00:06:17.241 00:06:17.241 ' 00:06:17.241 08:39:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:17.241 08:39:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:17.241 08:39:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:17.241 08:39:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.242 08:39:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.242 08:39:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.242 ************************************ 00:06:17.242 START TEST skip_rpc 00:06:17.242 ************************************ 00:06:17.242 08:39:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:17.242 08:39:47 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56999 00:06:17.242 08:39:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.242 08:39:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:17.242 08:39:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:17.242 [2024-11-20 08:39:48.043074] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:17.242 [2024-11-20 08:39:48.043793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56999 ] 00:06:17.502 [2024-11-20 08:39:48.198717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.502 [2024-11-20 08:39:48.289613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.502 [2024-11-20 08:39:48.402160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56999 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56999 ']' 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56999 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.773 08:39:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56999 00:06:22.773 08:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.773 killing process with pid 56999 00:06:22.773 08:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.773 08:39:53 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56999' 00:06:22.773 08:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56999 00:06:22.773 08:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56999 00:06:22.773 00:06:22.773 real 0m5.600s 00:06:22.773 user 0m5.088s 00:06:22.773 sys 0m0.408s 00:06:22.773 ************************************ 00:06:22.773 END TEST skip_rpc 00:06:22.773 08:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.773 08:39:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.773 ************************************ 00:06:22.773 08:39:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:22.773 08:39:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.773 08:39:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.773 08:39:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.773 ************************************ 00:06:22.773 START TEST skip_rpc_with_json 00:06:22.773 ************************************ 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57086 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57086 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57086 ']' 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.773 08:39:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.031 [2024-11-20 08:39:53.694477] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
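For reference, the skip_rpc case that just passed starts the target with --no-rpc-server and succeeds only if an RPC against the default socket fails; reproduced by hand (the rpc.py call is expected to error out), roughly:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    ./scripts/rpc.py spdk_get_version \
        && echo "unexpected: RPC was served" \
        || echo "expected: no RPC server is listening"

The skip_rpc_with_json case whose target (pid 57086) is starting up here then checks config round-tripping: nvmf_get_transports fails first because no TCP transport exists yet, a transport is created, the live configuration is dumped with save_config, and a second spdk_tgt is later booted straight from that JSON file; in isolation that sequence is roughly:

    ./scripts/rpc.py nvmf_get_transports --trtype tcp      # fails with the "transport 'tcp' does not exist" error seen below
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json
    # stop the first target, then start a fresh one from the saved config
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &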
00:06:23.031 [2024-11-20 08:39:53.694617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57086 ] 00:06:23.031 [2024-11-20 08:39:53.845946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.031 [2024-11-20 08:39:53.933892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.289 [2024-11-20 08:39:54.044125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.548 [2024-11-20 08:39:54.323836] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:23.548 request: 00:06:23.548 { 00:06:23.548 "trtype": "tcp", 00:06:23.548 "method": "nvmf_get_transports", 00:06:23.548 "req_id": 1 00:06:23.548 } 00:06:23.548 Got JSON-RPC error response 00:06:23.548 response: 00:06:23.548 { 00:06:23.548 "code": -19, 00:06:23.548 "message": "No such device" 00:06:23.548 } 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.548 [2024-11-20 08:39:54.335922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.548 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.808 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.808 08:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:23.808 { 00:06:23.808 "subsystems": [ 00:06:23.808 { 00:06:23.808 "subsystem": "fsdev", 00:06:23.808 "config": [ 00:06:23.808 { 00:06:23.808 "method": "fsdev_set_opts", 00:06:23.808 "params": { 00:06:23.808 "fsdev_io_pool_size": 65535, 00:06:23.808 "fsdev_io_cache_size": 256 00:06:23.808 } 00:06:23.808 } 00:06:23.808 ] 00:06:23.808 }, 00:06:23.808 { 00:06:23.808 "subsystem": "keyring", 00:06:23.808 "config": [] 00:06:23.808 }, 00:06:23.808 { 00:06:23.808 "subsystem": "iobuf", 00:06:23.808 "config": [ 00:06:23.808 { 00:06:23.808 "method": "iobuf_set_options", 00:06:23.808 "params": { 00:06:23.808 "small_pool_count": 8192, 00:06:23.808 "large_pool_count": 1024, 00:06:23.808 "small_bufsize": 8192, 00:06:23.808 "large_bufsize": 135168, 00:06:23.808 "enable_numa": false 00:06:23.808 } 
00:06:23.808 } 00:06:23.808 ] 00:06:23.808 }, 00:06:23.808 { 00:06:23.808 "subsystem": "sock", 00:06:23.808 "config": [ 00:06:23.808 { 00:06:23.808 "method": "sock_set_default_impl", 00:06:23.808 "params": { 00:06:23.808 "impl_name": "uring" 00:06:23.808 } 00:06:23.808 }, 00:06:23.808 { 00:06:23.808 "method": "sock_impl_set_options", 00:06:23.808 "params": { 00:06:23.808 "impl_name": "ssl", 00:06:23.808 "recv_buf_size": 4096, 00:06:23.808 "send_buf_size": 4096, 00:06:23.808 "enable_recv_pipe": true, 00:06:23.808 "enable_quickack": false, 00:06:23.808 "enable_placement_id": 0, 00:06:23.808 "enable_zerocopy_send_server": true, 00:06:23.808 "enable_zerocopy_send_client": false, 00:06:23.808 "zerocopy_threshold": 0, 00:06:23.808 "tls_version": 0, 00:06:23.808 "enable_ktls": false 00:06:23.808 } 00:06:23.808 }, 00:06:23.808 { 00:06:23.808 "method": "sock_impl_set_options", 00:06:23.808 "params": { 00:06:23.808 "impl_name": "posix", 00:06:23.808 "recv_buf_size": 2097152, 00:06:23.808 "send_buf_size": 2097152, 00:06:23.808 "enable_recv_pipe": true, 00:06:23.808 "enable_quickack": false, 00:06:23.808 "enable_placement_id": 0, 00:06:23.808 "enable_zerocopy_send_server": true, 00:06:23.808 "enable_zerocopy_send_client": false, 00:06:23.808 "zerocopy_threshold": 0, 00:06:23.808 "tls_version": 0, 00:06:23.808 "enable_ktls": false 00:06:23.808 } 00:06:23.808 }, 00:06:23.808 { 00:06:23.808 "method": "sock_impl_set_options", 00:06:23.809 "params": { 00:06:23.809 "impl_name": "uring", 00:06:23.809 "recv_buf_size": 2097152, 00:06:23.809 "send_buf_size": 2097152, 00:06:23.809 "enable_recv_pipe": true, 00:06:23.809 "enable_quickack": false, 00:06:23.809 "enable_placement_id": 0, 00:06:23.809 "enable_zerocopy_send_server": false, 00:06:23.809 "enable_zerocopy_send_client": false, 00:06:23.809 "zerocopy_threshold": 0, 00:06:23.809 "tls_version": 0, 00:06:23.809 "enable_ktls": false 00:06:23.809 } 00:06:23.809 } 00:06:23.809 ] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "vmd", 00:06:23.809 "config": [] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "accel", 00:06:23.809 "config": [ 00:06:23.809 { 00:06:23.809 "method": "accel_set_options", 00:06:23.809 "params": { 00:06:23.809 "small_cache_size": 128, 00:06:23.809 "large_cache_size": 16, 00:06:23.809 "task_count": 2048, 00:06:23.809 "sequence_count": 2048, 00:06:23.809 "buf_count": 2048 00:06:23.809 } 00:06:23.809 } 00:06:23.809 ] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "bdev", 00:06:23.809 "config": [ 00:06:23.809 { 00:06:23.809 "method": "bdev_set_options", 00:06:23.809 "params": { 00:06:23.809 "bdev_io_pool_size": 65535, 00:06:23.809 "bdev_io_cache_size": 256, 00:06:23.809 "bdev_auto_examine": true, 00:06:23.809 "iobuf_small_cache_size": 128, 00:06:23.809 "iobuf_large_cache_size": 16 00:06:23.809 } 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "method": "bdev_raid_set_options", 00:06:23.809 "params": { 00:06:23.809 "process_window_size_kb": 1024, 00:06:23.809 "process_max_bandwidth_mb_sec": 0 00:06:23.809 } 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "method": "bdev_iscsi_set_options", 00:06:23.809 "params": { 00:06:23.809 "timeout_sec": 30 00:06:23.809 } 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "method": "bdev_nvme_set_options", 00:06:23.809 "params": { 00:06:23.809 "action_on_timeout": "none", 00:06:23.809 "timeout_us": 0, 00:06:23.809 "timeout_admin_us": 0, 00:06:23.809 "keep_alive_timeout_ms": 10000, 00:06:23.809 "arbitration_burst": 0, 00:06:23.809 "low_priority_weight": 0, 00:06:23.809 "medium_priority_weight": 
0, 00:06:23.809 "high_priority_weight": 0, 00:06:23.809 "nvme_adminq_poll_period_us": 10000, 00:06:23.809 "nvme_ioq_poll_period_us": 0, 00:06:23.809 "io_queue_requests": 0, 00:06:23.809 "delay_cmd_submit": true, 00:06:23.809 "transport_retry_count": 4, 00:06:23.809 "bdev_retry_count": 3, 00:06:23.809 "transport_ack_timeout": 0, 00:06:23.809 "ctrlr_loss_timeout_sec": 0, 00:06:23.809 "reconnect_delay_sec": 0, 00:06:23.809 "fast_io_fail_timeout_sec": 0, 00:06:23.809 "disable_auto_failback": false, 00:06:23.809 "generate_uuids": false, 00:06:23.809 "transport_tos": 0, 00:06:23.809 "nvme_error_stat": false, 00:06:23.809 "rdma_srq_size": 0, 00:06:23.809 "io_path_stat": false, 00:06:23.809 "allow_accel_sequence": false, 00:06:23.809 "rdma_max_cq_size": 0, 00:06:23.809 "rdma_cm_event_timeout_ms": 0, 00:06:23.809 "dhchap_digests": [ 00:06:23.809 "sha256", 00:06:23.809 "sha384", 00:06:23.809 "sha512" 00:06:23.809 ], 00:06:23.809 "dhchap_dhgroups": [ 00:06:23.809 "null", 00:06:23.809 "ffdhe2048", 00:06:23.809 "ffdhe3072", 00:06:23.809 "ffdhe4096", 00:06:23.809 "ffdhe6144", 00:06:23.809 "ffdhe8192" 00:06:23.809 ] 00:06:23.809 } 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "method": "bdev_nvme_set_hotplug", 00:06:23.809 "params": { 00:06:23.809 "period_us": 100000, 00:06:23.809 "enable": false 00:06:23.809 } 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "method": "bdev_wait_for_examine" 00:06:23.809 } 00:06:23.809 ] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "scsi", 00:06:23.809 "config": null 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "scheduler", 00:06:23.809 "config": [ 00:06:23.809 { 00:06:23.809 "method": "framework_set_scheduler", 00:06:23.809 "params": { 00:06:23.809 "name": "static" 00:06:23.809 } 00:06:23.809 } 00:06:23.809 ] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "vhost_scsi", 00:06:23.809 "config": [] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "vhost_blk", 00:06:23.809 "config": [] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "ublk", 00:06:23.809 "config": [] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "nbd", 00:06:23.809 "config": [] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "nvmf", 00:06:23.809 "config": [ 00:06:23.809 { 00:06:23.809 "method": "nvmf_set_config", 00:06:23.809 "params": { 00:06:23.809 "discovery_filter": "match_any", 00:06:23.809 "admin_cmd_passthru": { 00:06:23.809 "identify_ctrlr": false 00:06:23.809 }, 00:06:23.809 "dhchap_digests": [ 00:06:23.809 "sha256", 00:06:23.809 "sha384", 00:06:23.809 "sha512" 00:06:23.809 ], 00:06:23.809 "dhchap_dhgroups": [ 00:06:23.809 "null", 00:06:23.809 "ffdhe2048", 00:06:23.809 "ffdhe3072", 00:06:23.809 "ffdhe4096", 00:06:23.809 "ffdhe6144", 00:06:23.809 "ffdhe8192" 00:06:23.809 ] 00:06:23.809 } 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "method": "nvmf_set_max_subsystems", 00:06:23.809 "params": { 00:06:23.809 "max_subsystems": 1024 00:06:23.809 } 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "method": "nvmf_set_crdt", 00:06:23.809 "params": { 00:06:23.809 "crdt1": 0, 00:06:23.809 "crdt2": 0, 00:06:23.809 "crdt3": 0 00:06:23.809 } 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "method": "nvmf_create_transport", 00:06:23.809 "params": { 00:06:23.809 "trtype": "TCP", 00:06:23.809 "max_queue_depth": 128, 00:06:23.809 "max_io_qpairs_per_ctrlr": 127, 00:06:23.809 "in_capsule_data_size": 4096, 00:06:23.809 "max_io_size": 131072, 00:06:23.809 "io_unit_size": 131072, 00:06:23.809 "max_aq_depth": 128, 00:06:23.809 "num_shared_buffers": 511, 00:06:23.809 
"buf_cache_size": 4294967295, 00:06:23.809 "dif_insert_or_strip": false, 00:06:23.809 "zcopy": false, 00:06:23.809 "c2h_success": true, 00:06:23.809 "sock_priority": 0, 00:06:23.809 "abort_timeout_sec": 1, 00:06:23.809 "ack_timeout": 0, 00:06:23.809 "data_wr_pool_size": 0 00:06:23.809 } 00:06:23.809 } 00:06:23.809 ] 00:06:23.809 }, 00:06:23.809 { 00:06:23.809 "subsystem": "iscsi", 00:06:23.809 "config": [ 00:06:23.809 { 00:06:23.809 "method": "iscsi_set_options", 00:06:23.809 "params": { 00:06:23.809 "node_base": "iqn.2016-06.io.spdk", 00:06:23.809 "max_sessions": 128, 00:06:23.809 "max_connections_per_session": 2, 00:06:23.809 "max_queue_depth": 64, 00:06:23.809 "default_time2wait": 2, 00:06:23.809 "default_time2retain": 20, 00:06:23.809 "first_burst_length": 8192, 00:06:23.809 "immediate_data": true, 00:06:23.809 "allow_duplicated_isid": false, 00:06:23.809 "error_recovery_level": 0, 00:06:23.809 "nop_timeout": 60, 00:06:23.809 "nop_in_interval": 30, 00:06:23.809 "disable_chap": false, 00:06:23.809 "require_chap": false, 00:06:23.809 "mutual_chap": false, 00:06:23.809 "chap_group": 0, 00:06:23.809 "max_large_datain_per_connection": 64, 00:06:23.809 "max_r2t_per_connection": 4, 00:06:23.809 "pdu_pool_size": 36864, 00:06:23.809 "immediate_data_pool_size": 16384, 00:06:23.809 "data_out_pool_size": 2048 00:06:23.809 } 00:06:23.809 } 00:06:23.809 ] 00:06:23.809 } 00:06:23.809 ] 00:06:23.809 } 00:06:23.809 08:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:23.809 08:39:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57086 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57086 ']' 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57086 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57086 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.810 killing process with pid 57086 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57086' 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57086 00:06:23.810 08:39:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57086 00:06:24.375 08:39:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57111 00:06:24.375 08:39:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:24.375 08:39:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57111 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57111 ']' 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57111 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:29.679 08:40:00 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57111 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.679 killing process with pid 57111 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57111' 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57111 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57111 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:29.679 00:06:29.679 real 0m6.884s 00:06:29.679 user 0m6.243s 00:06:29.679 sys 0m0.866s 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.679 ************************************ 00:06:29.679 END TEST skip_rpc_with_json 00:06:29.679 ************************************ 00:06:29.679 08:40:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:29.679 08:40:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.679 08:40:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.679 08:40:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.679 ************************************ 00:06:29.679 START TEST skip_rpc_with_delay 00:06:29.679 ************************************ 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:29.679 08:40:00 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:29.679 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:29.939 [2024-11-20 08:40:00.640577] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:29.939 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:29.939 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.939 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.939 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.939 00:06:29.939 real 0m0.101s 00:06:29.939 user 0m0.065s 00:06:29.939 sys 0m0.035s 00:06:29.939 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.939 08:40:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:29.939 ************************************ 00:06:29.939 END TEST skip_rpc_with_delay 00:06:29.939 ************************************ 00:06:29.939 08:40:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:29.939 08:40:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:29.939 08:40:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:29.939 08:40:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.939 08:40:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.939 08:40:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.939 ************************************ 00:06:29.939 START TEST exit_on_failed_rpc_init 00:06:29.939 ************************************ 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57221 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57221 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57221 ']' 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.939 08:40:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:29.939 [2024-11-20 08:40:00.778741] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
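Note: the skip_rpc_with_delay case traced above only asserts that spdk_tgt refuses an inconsistent flag pair; there is nothing to configure. A minimal stand-alone sketch of the same check follows (the binary path matches the trace; the surrounding error handling is simplified and assumed, not taken from the harness).

  # spdk_tgt should fail fast when --wait-for-rpc is combined with --no-rpc-server,
  # since there is no RPC server to wait for; the harness expects a non-zero exit.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target accepted --wait-for-rpc without an RPC server" >&2
      exit 1
  fi
  echo "expected failure observed"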
00:06:29.939 [2024-11-20 08:40:00.778854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57221 ] 00:06:30.198 [2024-11-20 08:40:00.929223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.198 [2024-11-20 08:40:01.013653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.456 [2024-11-20 08:40:01.119828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.714 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.714 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:30.714 08:40:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:30.715 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:30.715 [2024-11-20 08:40:01.467665] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:30.715 [2024-11-20 08:40:01.467788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57231 ] 00:06:30.715 [2024-11-20 08:40:01.622110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.973 [2024-11-20 08:40:01.711515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.973 [2024-11-20 08:40:01.711631] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
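Note: exit_on_failed_rpc_init deliberately provokes the "RPC Unix domain socket path ... in use" error shown above by starting a second target against the same default RPC socket. A hedged sketch of that scenario (paths from the trace; the sleep-based wait is a simplification standing in for the harness's waitforlisten helper):

  # First target owns /var/tmp/spdk.sock; a second init on the same socket must fail.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &
  first_pid=$!
  sleep 1                          # crude stand-in for waitforlisten
  if "$SPDK_TGT" -m 0x2; then      # same default RPC socket, expected to fail
      echo "unexpected: second target initialized" >&2
  fi
  kill -SIGINT "$first_pid"
  wait "$first_pid"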
00:06:30.973 [2024-11-20 08:40:01.711656] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:30.973 [2024-11-20 08:40:01.711671] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57221 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57221 ']' 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57221 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57221 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.973 killing process with pid 57221 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57221' 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57221 00:06:30.973 08:40:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57221 00:06:31.541 00:06:31.541 real 0m1.633s 00:06:31.541 user 0m1.679s 00:06:31.541 sys 0m0.491s 00:06:31.541 08:40:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.541 08:40:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:31.541 ************************************ 00:06:31.541 END TEST exit_on_failed_rpc_init 00:06:31.541 ************************************ 00:06:31.541 08:40:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:31.541 00:06:31.541 real 0m14.637s 00:06:31.541 user 0m13.291s 00:06:31.541 sys 0m2.000s 00:06:31.541 08:40:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.541 08:40:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.541 ************************************ 00:06:31.541 END TEST skip_rpc 00:06:31.541 ************************************ 00:06:31.541 08:40:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:31.541 08:40:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.541 08:40:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.541 08:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:31.541 
************************************ 00:06:31.541 START TEST rpc_client 00:06:31.541 ************************************ 00:06:31.541 08:40:02 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:31.799 * Looking for test storage... 00:06:31.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:31.799 08:40:02 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.799 08:40:02 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.799 08:40:02 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.799 08:40:02 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.799 08:40:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.799 08:40:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.799 08:40:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.800 08:40:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:31.800 08:40:02 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.800 08:40:02 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.800 --rc genhtml_branch_coverage=1 00:06:31.800 --rc genhtml_function_coverage=1 00:06:31.800 --rc genhtml_legend=1 00:06:31.800 --rc geninfo_all_blocks=1 00:06:31.800 --rc geninfo_unexecuted_blocks=1 00:06:31.800 00:06:31.800 ' 00:06:31.800 08:40:02 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.800 --rc genhtml_branch_coverage=1 00:06:31.800 --rc genhtml_function_coverage=1 00:06:31.800 --rc genhtml_legend=1 00:06:31.800 --rc geninfo_all_blocks=1 00:06:31.800 --rc geninfo_unexecuted_blocks=1 00:06:31.800 00:06:31.800 ' 00:06:31.800 08:40:02 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.800 --rc genhtml_branch_coverage=1 00:06:31.800 --rc genhtml_function_coverage=1 00:06:31.800 --rc genhtml_legend=1 00:06:31.800 --rc geninfo_all_blocks=1 00:06:31.800 --rc geninfo_unexecuted_blocks=1 00:06:31.800 00:06:31.800 ' 00:06:31.800 08:40:02 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.800 --rc genhtml_branch_coverage=1 00:06:31.800 --rc genhtml_function_coverage=1 00:06:31.800 --rc genhtml_legend=1 00:06:31.800 --rc geninfo_all_blocks=1 00:06:31.800 --rc geninfo_unexecuted_blocks=1 00:06:31.800 00:06:31.800 ' 00:06:31.800 08:40:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:31.800 OK 00:06:31.800 08:40:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:31.800 00:06:31.800 real 0m0.222s 00:06:31.800 user 0m0.139s 00:06:31.800 sys 0m0.094s 00:06:31.800 08:40:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.800 08:40:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:31.800 ************************************ 00:06:31.800 END TEST rpc_client 00:06:31.800 ************************************ 00:06:32.059 08:40:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:32.059 08:40:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.059 08:40:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.059 08:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.059 ************************************ 00:06:32.059 START TEST json_config 00:06:32.059 ************************************ 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.059 08:40:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.059 08:40:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.059 08:40:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.059 08:40:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.059 08:40:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.059 08:40:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.059 08:40:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.059 08:40:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.059 08:40:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.059 08:40:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.059 08:40:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.059 08:40:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:32.059 08:40:02 json_config -- scripts/common.sh@345 -- # : 1 00:06:32.059 08:40:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.059 08:40:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.059 08:40:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:32.059 08:40:02 json_config -- scripts/common.sh@353 -- # local d=1 00:06:32.059 08:40:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.059 08:40:02 json_config -- scripts/common.sh@355 -- # echo 1 00:06:32.059 08:40:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.059 08:40:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:32.059 08:40:02 json_config -- scripts/common.sh@353 -- # local d=2 00:06:32.059 08:40:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.059 08:40:02 json_config -- scripts/common.sh@355 -- # echo 2 00:06:32.059 08:40:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.059 08:40:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.059 08:40:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.059 08:40:02 json_config -- scripts/common.sh@368 -- # return 0 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.059 --rc genhtml_branch_coverage=1 00:06:32.059 --rc genhtml_function_coverage=1 00:06:32.059 --rc genhtml_legend=1 00:06:32.059 --rc geninfo_all_blocks=1 00:06:32.059 --rc geninfo_unexecuted_blocks=1 00:06:32.059 00:06:32.059 ' 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.059 --rc genhtml_branch_coverage=1 00:06:32.059 --rc genhtml_function_coverage=1 00:06:32.059 --rc genhtml_legend=1 00:06:32.059 --rc geninfo_all_blocks=1 00:06:32.059 --rc geninfo_unexecuted_blocks=1 00:06:32.059 00:06:32.059 ' 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.059 --rc genhtml_branch_coverage=1 00:06:32.059 --rc genhtml_function_coverage=1 00:06:32.059 --rc genhtml_legend=1 00:06:32.059 --rc geninfo_all_blocks=1 00:06:32.059 --rc geninfo_unexecuted_blocks=1 00:06:32.059 00:06:32.059 ' 00:06:32.059 08:40:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.059 --rc genhtml_branch_coverage=1 00:06:32.059 --rc genhtml_function_coverage=1 00:06:32.059 --rc genhtml_legend=1 00:06:32.059 --rc geninfo_all_blocks=1 00:06:32.059 --rc geninfo_unexecuted_blocks=1 00:06:32.059 00:06:32.059 ' 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.059 08:40:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.059 08:40:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.059 08:40:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.059 08:40:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.059 08:40:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.059 08:40:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.059 08:40:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.059 08:40:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.059 08:40:02 json_config -- paths/export.sh@5 -- # export PATH 00:06:32.059 08:40:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@51 -- # : 0 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.059 08:40:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.059 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.059 08:40:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:32.059 08:40:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:32.060 INFO: JSON configuration test init 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.060 08:40:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:32.060 08:40:02 json_config -- json_config/common.sh@9 -- # local app=target 00:06:32.060 08:40:02 json_config -- json_config/common.sh@10 -- # shift 
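Note: the declarations above come from json_config/common.sh, which tracks each app in associative arrays and starts the target with --wait-for-rpc so the test can load configuration explicitly. A condensed sketch of that launch pattern follows; the rpc_get_methods polling loop is an assumed stand-in for the harness's waitforlisten helper.

  # Launch the target the way json_config/common.sh does, then wait for its RPC socket.
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A app_pid

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$SPDK_TGT" ${app_params[target]} -r "${app_socket[target]}" --wait-for-rpc &
  app_pid[target]=$!

  # Poll until the target answers on its RPC socket.
  until "$RPC" -s "${app_socket[target]}" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done
  echo "target is up with pid ${app_pid[target]}"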
00:06:32.060 08:40:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:32.060 08:40:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:32.060 08:40:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:32.060 08:40:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.060 08:40:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.060 08:40:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57371 00:06:32.060 Waiting for target to run... 00:06:32.060 08:40:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:32.060 08:40:02 json_config -- json_config/common.sh@25 -- # waitforlisten 57371 /var/tmp/spdk_tgt.sock 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 57371 ']' 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:32.060 08:40:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.060 08:40:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.318 [2024-11-20 08:40:03.013283] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:32.318 [2024-11-20 08:40:03.013403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57371 ] 00:06:32.885 [2024-11-20 08:40:03.520056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.885 [2024-11-20 08:40:03.595509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.453 08:40:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.453 00:06:33.453 08:40:04 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:33.453 08:40:04 json_config -- json_config/common.sh@26 -- # echo '' 00:06:33.453 08:40:04 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:33.453 08:40:04 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:33.453 08:40:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.453 08:40:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.453 08:40:04 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:33.453 08:40:04 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:33.453 08:40:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.453 08:40:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.453 08:40:04 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:33.453 08:40:04 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:33.453 08:40:04 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:33.712 [2024-11-20 08:40:04.468059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:33.982 08:40:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.982 08:40:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:33.982 08:40:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:33.982 08:40:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@54 -- # sort 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:34.245 08:40:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.245 08:40:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:34.245 08:40:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.245 08:40:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.245 08:40:05 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:34.245 08:40:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:34.245 08:40:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:34.504 MallocForNvmf0 00:06:34.504 08:40:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:34.504 08:40:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:34.762 MallocForNvmf1 00:06:34.762 08:40:05 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:34.762 08:40:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:35.021 [2024-11-20 08:40:05.880633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.021 08:40:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:35.021 08:40:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:35.280 08:40:06 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:35.280 08:40:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:35.538 08:40:06 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:35.538 08:40:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:36.124 08:40:06 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:36.124 08:40:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:36.124 [2024-11-20 08:40:06.997340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:36.124 08:40:07 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:36.124 08:40:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.124 08:40:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.383 08:40:07 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:36.383 08:40:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.383 08:40:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.383 08:40:07 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:06:36.383 08:40:07 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:36.383 08:40:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:36.642 MallocBdevForConfigChangeCheck 00:06:36.642 08:40:07 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:36.642 08:40:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.642 08:40:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.642 08:40:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:36.642 08:40:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.253 INFO: shutting down applications... 00:06:37.253 08:40:07 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:37.253 08:40:07 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:37.253 08:40:07 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:37.253 08:40:07 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:37.253 08:40:07 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:37.514 Calling clear_iscsi_subsystem 00:06:37.514 Calling clear_nvmf_subsystem 00:06:37.514 Calling clear_nbd_subsystem 00:06:37.514 Calling clear_ublk_subsystem 00:06:37.514 Calling clear_vhost_blk_subsystem 00:06:37.514 Calling clear_vhost_scsi_subsystem 00:06:37.514 Calling clear_bdev_subsystem 00:06:37.514 08:40:08 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:37.514 08:40:08 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:37.514 08:40:08 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:37.514 08:40:08 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.514 08:40:08 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:37.514 08:40:08 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:37.774 08:40:08 json_config -- json_config/json_config.sh@352 -- # break 00:06:37.774 08:40:08 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:37.774 08:40:08 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:37.774 08:40:08 json_config -- json_config/common.sh@31 -- # local app=target 00:06:37.774 08:40:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:37.774 08:40:08 json_config -- json_config/common.sh@35 -- # [[ -n 57371 ]] 00:06:37.774 08:40:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57371 00:06:37.774 08:40:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:37.774 08:40:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.774 08:40:08 json_config -- json_config/common.sh@41 -- # kill -0 57371 00:06:37.774 08:40:08 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:06:38.341 08:40:09 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.341 08:40:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.341 08:40:09 json_config -- json_config/common.sh@41 -- # kill -0 57371 00:06:38.341 08:40:09 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.341 08:40:09 json_config -- json_config/common.sh@43 -- # break 00:06:38.341 08:40:09 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.341 SPDK target shutdown done 00:06:38.341 08:40:09 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.341 INFO: relaunching applications... 00:06:38.341 08:40:09 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:38.341 08:40:09 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:38.341 08:40:09 json_config -- json_config/common.sh@9 -- # local app=target 00:06:38.341 08:40:09 json_config -- json_config/common.sh@10 -- # shift 00:06:38.341 08:40:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:38.341 08:40:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:38.341 08:40:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:38.341 08:40:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.341 08:40:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.341 08:40:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57572 00:06:38.341 Waiting for target to run... 00:06:38.341 08:40:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:38.341 08:40:09 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:38.341 08:40:09 json_config -- json_config/common.sh@25 -- # waitforlisten 57572 /var/tmp/spdk_tgt.sock 00:06:38.341 08:40:09 json_config -- common/autotest_common.sh@835 -- # '[' -z 57572 ']' 00:06:38.341 08:40:09 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:38.341 08:40:09 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:38.341 08:40:09 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:38.341 08:40:09 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.341 08:40:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.341 [2024-11-20 08:40:09.185831] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
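Note: the create_nvmf_subsystem_config step traced above is a plain rpc.py sequence once the tgt_rpc wrapper is unrolled. Restated against the same socket (all values copied from the trace; the inline comments are informal glosses, not harness output):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # malloc bdevs backing the namespaces
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, same -u/-c as the test
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420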
00:06:38.341 [2024-11-20 08:40:09.185935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57572 ] 00:06:38.907 [2024-11-20 08:40:09.627287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.907 [2024-11-20 08:40:09.690712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.164 [2024-11-20 08:40:09.833537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.164 [2024-11-20 08:40:10.067542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.422 [2024-11-20 08:40:10.099679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:39.422 08:40:10 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.422 08:40:10 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:39.422 00:06:39.422 08:40:10 json_config -- json_config/common.sh@26 -- # echo '' 00:06:39.422 08:40:10 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:39.422 INFO: Checking if target configuration is the same... 00:06:39.423 08:40:10 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:39.423 08:40:10 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:39.423 08:40:10 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:39.423 08:40:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.423 + '[' 2 -ne 2 ']' 00:06:39.423 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:39.423 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:39.423 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:39.423 +++ basename /dev/fd/62 00:06:39.423 ++ mktemp /tmp/62.XXX 00:06:39.423 + tmp_file_1=/tmp/62.NlT 00:06:39.423 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:39.423 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:39.423 + tmp_file_2=/tmp/spdk_tgt_config.json.mXs 00:06:39.423 + ret=0 00:06:39.423 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:39.990 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:39.990 + diff -u /tmp/62.NlT /tmp/spdk_tgt_config.json.mXs 00:06:39.990 INFO: JSON config files are the same 00:06:39.990 + echo 'INFO: JSON config files are the same' 00:06:39.990 + rm /tmp/62.NlT /tmp/spdk_tgt_config.json.mXs 00:06:39.990 + exit 0 00:06:39.990 08:40:10 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:39.990 INFO: changing configuration and checking if this can be detected... 00:06:39.990 08:40:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
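Note: the json_diff.sh run traced above amounts to dumping the live configuration with save_config, normalizing both documents with config_filter.py -method sort, and diffing them. A condensed sketch of that comparison (it assumes config_filter.py reads the document on stdin, as it does in the pipelines above):

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER="$SPDK/test/json_config/config_filter.py"

  live=$(mktemp /tmp/62.XXX)                      # live config dumped over RPC
  $RPC save_config > "$live"

  # Sort both documents so key ordering cannot cause spurious differences.
  if diff -u <($FILTER -method sort < "$live") \
             <($FILTER -method sort < "$SPDK/spdk_tgt_config.json"); then
      echo 'INFO: JSON config files are the same'
  fi
  rm -f "$live"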
00:06:39.990 08:40:10 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:39.990 08:40:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:40.249 08:40:11 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:40.249 08:40:11 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:40.249 08:40:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:40.249 + '[' 2 -ne 2 ']' 00:06:40.249 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:40.249 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:40.249 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:40.249 +++ basename /dev/fd/62 00:06:40.249 ++ mktemp /tmp/62.XXX 00:06:40.249 + tmp_file_1=/tmp/62.c1b 00:06:40.249 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:40.249 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:40.249 + tmp_file_2=/tmp/spdk_tgt_config.json.v1t 00:06:40.249 + ret=0 00:06:40.249 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:40.816 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:40.816 + diff -u /tmp/62.c1b /tmp/spdk_tgt_config.json.v1t 00:06:40.816 + ret=1 00:06:40.816 + echo '=== Start of file: /tmp/62.c1b ===' 00:06:40.816 + cat /tmp/62.c1b 00:06:40.816 + echo '=== End of file: /tmp/62.c1b ===' 00:06:40.816 + echo '' 00:06:40.816 + echo '=== Start of file: /tmp/spdk_tgt_config.json.v1t ===' 00:06:40.816 + cat /tmp/spdk_tgt_config.json.v1t 00:06:40.816 + echo '=== End of file: /tmp/spdk_tgt_config.json.v1t ===' 00:06:40.816 + echo '' 00:06:40.816 + rm /tmp/62.c1b /tmp/spdk_tgt_config.json.v1t 00:06:40.816 + exit 1 00:06:40.816 INFO: configuration change detected. 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
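Note: change detection reuses the same sorted-diff helper: the test first mutates the running target (deleting the MallocBdevForConfigChangeCheck malloc bdev over RPC) and then expects the comparison that previously exited 0 to exit 1, which is exactly what the ret=1 branch above reports. Roughly, under the same assumptions as the previous sketch:

    # Mutate the live config, then the earlier comparison must now report a difference (sketch).
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if test/json_config/json_diff.sh <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
                                     spdk_tgt_config.json; then
        echo 'ERROR: configuration change was not detected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'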
00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:40.816 08:40:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.816 08:40:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@324 -- # [[ -n 57572 ]] 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:40.816 08:40:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.816 08:40:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:40.816 08:40:11 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:40.817 08:40:11 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:40.817 08:40:11 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:40.817 08:40:11 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:40.817 08:40:11 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.817 08:40:11 json_config -- json_config/json_config.sh@330 -- # killprocess 57572 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@954 -- # '[' -z 57572 ']' 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@958 -- # kill -0 57572 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@959 -- # uname 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57572 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.817 killing process with pid 57572 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57572' 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@973 -- # kill 57572 00:06:40.817 08:40:11 json_config -- common/autotest_common.sh@978 -- # wait 57572 00:06:41.384 08:40:12 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:41.384 08:40:12 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:41.384 08:40:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.384 08:40:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.384 08:40:12 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:41.384 INFO: Success 00:06:41.384 08:40:12 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:41.384 00:06:41.384 real 0m9.332s 00:06:41.384 user 0m13.487s 00:06:41.384 sys 0m1.984s 00:06:41.384 
08:40:12 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.384 ************************************ 00:06:41.384 END TEST json_config 00:06:41.384 ************************************ 00:06:41.384 08:40:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.384 08:40:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:41.384 08:40:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.384 08:40:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.384 08:40:12 -- common/autotest_common.sh@10 -- # set +x 00:06:41.384 ************************************ 00:06:41.384 START TEST json_config_extra_key 00:06:41.384 ************************************ 00:06:41.384 08:40:12 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:41.384 08:40:12 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.384 08:40:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.384 08:40:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.384 08:40:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.384 08:40:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.385 08:40:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:41.385 08:40:12 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.385 08:40:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.385 --rc genhtml_branch_coverage=1 00:06:41.385 --rc genhtml_function_coverage=1 00:06:41.385 --rc genhtml_legend=1 00:06:41.385 --rc geninfo_all_blocks=1 00:06:41.385 --rc geninfo_unexecuted_blocks=1 00:06:41.385 00:06:41.385 ' 00:06:41.385 08:40:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.385 --rc genhtml_branch_coverage=1 00:06:41.385 --rc genhtml_function_coverage=1 00:06:41.385 --rc genhtml_legend=1 00:06:41.385 --rc geninfo_all_blocks=1 00:06:41.385 --rc geninfo_unexecuted_blocks=1 00:06:41.385 00:06:41.385 ' 00:06:41.385 08:40:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.385 --rc genhtml_branch_coverage=1 00:06:41.385 --rc genhtml_function_coverage=1 00:06:41.385 --rc genhtml_legend=1 00:06:41.385 --rc geninfo_all_blocks=1 00:06:41.385 --rc geninfo_unexecuted_blocks=1 00:06:41.385 00:06:41.385 ' 00:06:41.385 08:40:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.385 --rc genhtml_branch_coverage=1 00:06:41.385 --rc genhtml_function_coverage=1 00:06:41.385 --rc genhtml_legend=1 00:06:41.385 --rc geninfo_all_blocks=1 00:06:41.385 --rc geninfo_unexecuted_blocks=1 00:06:41.385 00:06:41.385 ' 00:06:41.385 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:41.385 08:40:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.644 08:40:12 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.644 08:40:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.644 08:40:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.644 08:40:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.644 08:40:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.644 08:40:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.644 08:40:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.644 08:40:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.644 08:40:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:41.644 08:40:12 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.644 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.644 08:40:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:41.645 INFO: launching applications... 00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
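Note: the "[: : integer expression expected" line above is not a test failure; it is bash's test builtin complaining that nvmf/common.sh compared an empty string numerically ('[' '' -eq 1 ']'). The condition just evaluates as false and the script carries on. A defensive variant (purely illustrative; SPDK_TEST_FOO is a made-up variable and this is not how nvmf/common.sh is written) guards or defaults the value before the numeric test:

    # Guard a numeric comparison against an unset/empty variable (sketch).
    if [[ -n "$SPDK_TEST_FOO" && "$SPDK_TEST_FOO" -eq 1 ]]; then
        echo "feature enabled"
    fi
    # or give it a default before comparing:
    if (( ${SPDK_TEST_FOO:-0} == 1 )); then
        echo "feature enabled"
    fi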
00:06:41.645 08:40:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57726 00:06:41.645 Waiting for target to run... 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57726 /var/tmp/spdk_tgt.sock 00:06:41.645 08:40:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57726 ']' 00:06:41.645 08:40:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.645 08:40:12 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:41.645 08:40:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.645 08:40:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.645 08:40:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.645 08:40:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:41.645 [2024-11-20 08:40:12.397716] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:41.645 [2024-11-20 08:40:12.397867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57726 ] 00:06:42.213 [2024-11-20 08:40:12.884101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.213 [2024-11-20 08:40:12.961828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.213 [2024-11-20 08:40:13.001043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.801 08:40:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.801 00:06:42.801 08:40:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:42.801 INFO: shutting down applications... 00:06:42.801 08:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
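Note: json_config/common.sh keeps one entry per launched application ('target' here) in a set of associative arrays: its PID, its RPC socket, its extra launch parameters and its config path; the start and shutdown helpers traced in this log only ever index into those maps. In outline (a sketch, not the verbatim helper):

    # Per-app bookkeeping used by the start/shutdown helpers (sketch).
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']="$rootdir/test/json_config/extra_key.json")

    start_app() {
        local app=$1
        ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!            # remembered so shutdown can signal it later
    }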
00:06:42.801 08:40:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57726 ]] 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57726 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57726 00:06:42.801 08:40:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:43.367 08:40:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:43.367 08:40:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.367 08:40:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57726 00:06:43.367 08:40:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:43.626 08:40:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:43.627 08:40:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.627 08:40:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57726 00:06:43.627 08:40:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:43.627 08:40:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:43.627 08:40:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:43.627 08:40:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:43.627 SPDK target shutdown done 00:06:43.627 Success 00:06:43.627 08:40:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:43.627 ************************************ 00:06:43.627 END TEST json_config_extra_key 00:06:43.627 ************************************ 00:06:43.627 00:06:43.627 real 0m2.421s 00:06:43.627 user 0m2.103s 00:06:43.627 sys 0m0.550s 00:06:43.627 08:40:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.627 08:40:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:43.886 08:40:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:43.886 08:40:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.886 08:40:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.886 08:40:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.886 ************************************ 00:06:43.886 START TEST alias_rpc 00:06:43.886 ************************************ 00:06:43.886 08:40:14 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:43.886 * Looking for test storage... 
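Note: shutdown is cooperative: json_config_test_shutdown_app sends a single SIGINT and then probes the PID with kill -0 every half second, allowing up to 30 iterations (roughly 15 s) before it would give up; the "SPDK target shutdown done" message above is printed once the PID disappears. Stripped to its core (sketch):

    # Ask the target to exit, then wait for the PID to disappear (sketch).
    pid=${app_pid[target]}
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || { app_pid[target]=; break; }   # gone -> stop polling
        sleep 0.5
    done
    echo 'SPDK target shutdown done'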
00:06:43.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:43.886 08:40:14 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.886 08:40:14 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.886 08:40:14 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.886 08:40:14 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.886 08:40:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:44.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.145 08:40:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:44.145 08:40:14 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.145 08:40:14 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.145 --rc genhtml_branch_coverage=1 00:06:44.145 --rc genhtml_function_coverage=1 00:06:44.145 --rc genhtml_legend=1 00:06:44.145 --rc geninfo_all_blocks=1 00:06:44.145 --rc geninfo_unexecuted_blocks=1 00:06:44.145 00:06:44.145 ' 00:06:44.145 08:40:14 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.145 --rc genhtml_branch_coverage=1 00:06:44.145 --rc genhtml_function_coverage=1 00:06:44.145 --rc genhtml_legend=1 00:06:44.145 --rc geninfo_all_blocks=1 00:06:44.145 --rc geninfo_unexecuted_blocks=1 00:06:44.145 00:06:44.145 ' 00:06:44.145 08:40:14 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.145 --rc genhtml_branch_coverage=1 00:06:44.145 --rc genhtml_function_coverage=1 00:06:44.145 --rc genhtml_legend=1 00:06:44.145 --rc geninfo_all_blocks=1 00:06:44.145 --rc geninfo_unexecuted_blocks=1 00:06:44.145 00:06:44.145 ' 00:06:44.145 08:40:14 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.145 --rc genhtml_branch_coverage=1 00:06:44.146 --rc genhtml_function_coverage=1 00:06:44.146 --rc genhtml_legend=1 00:06:44.146 --rc geninfo_all_blocks=1 00:06:44.146 --rc geninfo_unexecuted_blocks=1 00:06:44.146 00:06:44.146 ' 00:06:44.146 08:40:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:44.146 08:40:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57810 00:06:44.146 08:40:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57810 00:06:44.146 08:40:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.146 08:40:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57810 ']' 00:06:44.146 08:40:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.146 08:40:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.146 08:40:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.146 08:40:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.146 08:40:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.146 [2024-11-20 08:40:14.885162] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
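Note: the recurring "lcov --version" preamble runs the version comparison from scripts/common.sh: both version strings are split on '.', '-' and ':', the fields are compared numerically from the left, and the requested operator ('<' for lt) decides the return code, which in turn selects the newer or older set of lcov coverage options. A compact restatement of that idea (version_lt is an illustrative name, and the real cmp_versions additionally validates each field as a decimal):

    # Field-wise version comparison in the spirit of scripts/common.sh (sketch).
    version_lt() {                 # returns 0 when $1 < $2
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1                   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "first version is older"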
00:06:44.146 [2024-11-20 08:40:14.885598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57810 ] 00:06:44.146 [2024-11-20 08:40:15.035381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.405 [2024-11-20 08:40:15.122496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.405 [2024-11-20 08:40:15.228061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.343 08:40:15 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.343 08:40:16 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.343 08:40:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:45.602 08:40:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57810 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57810 ']' 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57810 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57810 00:06:45.602 killing process with pid 57810 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57810' 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@973 -- # kill 57810 00:06:45.602 08:40:16 alias_rpc -- common/autotest_common.sh@978 -- # wait 57810 00:06:46.170 ************************************ 00:06:46.170 END TEST alias_rpc 00:06:46.170 ************************************ 00:06:46.170 00:06:46.170 real 0m2.379s 00:06:46.170 user 0m2.686s 00:06:46.170 sys 0m0.613s 00:06:46.170 08:40:16 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.170 08:40:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 08:40:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:46.170 08:40:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:46.170 08:40:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.170 08:40:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.170 08:40:17 -- common/autotest_common.sh@10 -- # set +x 00:06:46.170 ************************************ 00:06:46.170 START TEST spdkcli_tcp 00:06:46.170 ************************************ 00:06:46.170 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:46.429 * Looking for test storage... 
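Note: killprocess, as traced for pid 57810 above (and for 57572 and 57726 earlier), is deliberately careful: it refuses an empty PID, checks that the process still exists with kill -0, resolves the command name via ps --no-headers -o comm= so that a sudo wrapper is treated specially, and only then sends the default SIGTERM and waits for the child to be reaped. Condensed (the sudo branch is simplified here; the real helper handles that case rather than bailing out):

    # Careful process teardown in the style of the test helpers (sketch).
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1                  # simplified: don't signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }
    killprocess 57810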
00:06:46.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.429 08:40:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.429 --rc genhtml_branch_coverage=1 00:06:46.429 --rc genhtml_function_coverage=1 00:06:46.429 --rc genhtml_legend=1 00:06:46.429 --rc geninfo_all_blocks=1 00:06:46.429 --rc geninfo_unexecuted_blocks=1 00:06:46.429 00:06:46.429 ' 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.429 --rc genhtml_branch_coverage=1 00:06:46.429 --rc genhtml_function_coverage=1 00:06:46.429 --rc genhtml_legend=1 00:06:46.429 --rc geninfo_all_blocks=1 00:06:46.429 --rc geninfo_unexecuted_blocks=1 00:06:46.429 
00:06:46.429 ' 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.429 --rc genhtml_branch_coverage=1 00:06:46.429 --rc genhtml_function_coverage=1 00:06:46.429 --rc genhtml_legend=1 00:06:46.429 --rc geninfo_all_blocks=1 00:06:46.429 --rc geninfo_unexecuted_blocks=1 00:06:46.429 00:06:46.429 ' 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.429 --rc genhtml_branch_coverage=1 00:06:46.429 --rc genhtml_function_coverage=1 00:06:46.429 --rc genhtml_legend=1 00:06:46.429 --rc geninfo_all_blocks=1 00:06:46.429 --rc geninfo_unexecuted_blocks=1 00:06:46.429 00:06:46.429 ' 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57900 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:46.429 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57900 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57900 ']' 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.429 08:40:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.429 [2024-11-20 08:40:17.286729] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
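Note: this target is started with -m 0x3 -p 0 rather than the -m 0x1 used by the json_config runs: 0x3 is a two-core mask and -p 0 makes core 0 the main core, which is why the EAL banner that follows reports two available cores and two reactors. For comparison (sketch; flags as they appear in this log, paths shortened):

    # Core masks used by the targets in this log (sketch).
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./config.json   # one reactor, core 0
    ./build/bin/spdk_tgt -m 0x3 -p 0                                                     # reactors on cores 0 and 1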
00:06:46.429 [2024-11-20 08:40:17.287057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57900 ] 00:06:46.688 [2024-11-20 08:40:17.429792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.688 [2024-11-20 08:40:17.510911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.688 [2024-11-20 08:40:17.510930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.947 [2024-11-20 08:40:17.608319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.205 08:40:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.205 08:40:17 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:47.205 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57909 00:06:47.205 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:47.205 08:40:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:47.464 [ 00:06:47.464 "bdev_malloc_delete", 00:06:47.464 "bdev_malloc_create", 00:06:47.464 "bdev_null_resize", 00:06:47.464 "bdev_null_delete", 00:06:47.464 "bdev_null_create", 00:06:47.464 "bdev_nvme_cuse_unregister", 00:06:47.464 "bdev_nvme_cuse_register", 00:06:47.464 "bdev_opal_new_user", 00:06:47.464 "bdev_opal_set_lock_state", 00:06:47.464 "bdev_opal_delete", 00:06:47.464 "bdev_opal_get_info", 00:06:47.464 "bdev_opal_create", 00:06:47.464 "bdev_nvme_opal_revert", 00:06:47.464 "bdev_nvme_opal_init", 00:06:47.464 "bdev_nvme_send_cmd", 00:06:47.464 "bdev_nvme_set_keys", 00:06:47.464 "bdev_nvme_get_path_iostat", 00:06:47.464 "bdev_nvme_get_mdns_discovery_info", 00:06:47.464 "bdev_nvme_stop_mdns_discovery", 00:06:47.464 "bdev_nvme_start_mdns_discovery", 00:06:47.464 "bdev_nvme_set_multipath_policy", 00:06:47.464 "bdev_nvme_set_preferred_path", 00:06:47.464 "bdev_nvme_get_io_paths", 00:06:47.464 "bdev_nvme_remove_error_injection", 00:06:47.464 "bdev_nvme_add_error_injection", 00:06:47.464 "bdev_nvme_get_discovery_info", 00:06:47.464 "bdev_nvme_stop_discovery", 00:06:47.464 "bdev_nvme_start_discovery", 00:06:47.464 "bdev_nvme_get_controller_health_info", 00:06:47.464 "bdev_nvme_disable_controller", 00:06:47.464 "bdev_nvme_enable_controller", 00:06:47.464 "bdev_nvme_reset_controller", 00:06:47.465 "bdev_nvme_get_transport_statistics", 00:06:47.465 "bdev_nvme_apply_firmware", 00:06:47.465 "bdev_nvme_detach_controller", 00:06:47.465 "bdev_nvme_get_controllers", 00:06:47.465 "bdev_nvme_attach_controller", 00:06:47.465 "bdev_nvme_set_hotplug", 00:06:47.465 "bdev_nvme_set_options", 00:06:47.465 "bdev_passthru_delete", 00:06:47.465 "bdev_passthru_create", 00:06:47.465 "bdev_lvol_set_parent_bdev", 00:06:47.465 "bdev_lvol_set_parent", 00:06:47.465 "bdev_lvol_check_shallow_copy", 00:06:47.465 "bdev_lvol_start_shallow_copy", 00:06:47.465 "bdev_lvol_grow_lvstore", 00:06:47.465 "bdev_lvol_get_lvols", 00:06:47.465 "bdev_lvol_get_lvstores", 00:06:47.465 "bdev_lvol_delete", 00:06:47.465 "bdev_lvol_set_read_only", 00:06:47.465 "bdev_lvol_resize", 00:06:47.465 "bdev_lvol_decouple_parent", 00:06:47.465 "bdev_lvol_inflate", 00:06:47.465 "bdev_lvol_rename", 00:06:47.465 "bdev_lvol_clone_bdev", 00:06:47.465 "bdev_lvol_clone", 00:06:47.465 "bdev_lvol_snapshot", 
00:06:47.465 "bdev_lvol_create", 00:06:47.465 "bdev_lvol_delete_lvstore", 00:06:47.465 "bdev_lvol_rename_lvstore", 00:06:47.465 "bdev_lvol_create_lvstore", 00:06:47.465 "bdev_raid_set_options", 00:06:47.465 "bdev_raid_remove_base_bdev", 00:06:47.465 "bdev_raid_add_base_bdev", 00:06:47.465 "bdev_raid_delete", 00:06:47.465 "bdev_raid_create", 00:06:47.465 "bdev_raid_get_bdevs", 00:06:47.465 "bdev_error_inject_error", 00:06:47.465 "bdev_error_delete", 00:06:47.465 "bdev_error_create", 00:06:47.465 "bdev_split_delete", 00:06:47.465 "bdev_split_create", 00:06:47.465 "bdev_delay_delete", 00:06:47.465 "bdev_delay_create", 00:06:47.465 "bdev_delay_update_latency", 00:06:47.465 "bdev_zone_block_delete", 00:06:47.465 "bdev_zone_block_create", 00:06:47.465 "blobfs_create", 00:06:47.465 "blobfs_detect", 00:06:47.465 "blobfs_set_cache_size", 00:06:47.465 "bdev_aio_delete", 00:06:47.465 "bdev_aio_rescan", 00:06:47.465 "bdev_aio_create", 00:06:47.465 "bdev_ftl_set_property", 00:06:47.465 "bdev_ftl_get_properties", 00:06:47.465 "bdev_ftl_get_stats", 00:06:47.465 "bdev_ftl_unmap", 00:06:47.465 "bdev_ftl_unload", 00:06:47.465 "bdev_ftl_delete", 00:06:47.465 "bdev_ftl_load", 00:06:47.465 "bdev_ftl_create", 00:06:47.465 "bdev_virtio_attach_controller", 00:06:47.465 "bdev_virtio_scsi_get_devices", 00:06:47.465 "bdev_virtio_detach_controller", 00:06:47.465 "bdev_virtio_blk_set_hotplug", 00:06:47.465 "bdev_iscsi_delete", 00:06:47.465 "bdev_iscsi_create", 00:06:47.465 "bdev_iscsi_set_options", 00:06:47.465 "bdev_uring_delete", 00:06:47.465 "bdev_uring_rescan", 00:06:47.465 "bdev_uring_create", 00:06:47.465 "accel_error_inject_error", 00:06:47.465 "ioat_scan_accel_module", 00:06:47.465 "dsa_scan_accel_module", 00:06:47.465 "iaa_scan_accel_module", 00:06:47.465 "keyring_file_remove_key", 00:06:47.465 "keyring_file_add_key", 00:06:47.465 "keyring_linux_set_options", 00:06:47.465 "fsdev_aio_delete", 00:06:47.465 "fsdev_aio_create", 00:06:47.465 "iscsi_get_histogram", 00:06:47.465 "iscsi_enable_histogram", 00:06:47.465 "iscsi_set_options", 00:06:47.465 "iscsi_get_auth_groups", 00:06:47.465 "iscsi_auth_group_remove_secret", 00:06:47.465 "iscsi_auth_group_add_secret", 00:06:47.465 "iscsi_delete_auth_group", 00:06:47.465 "iscsi_create_auth_group", 00:06:47.465 "iscsi_set_discovery_auth", 00:06:47.465 "iscsi_get_options", 00:06:47.465 "iscsi_target_node_request_logout", 00:06:47.465 "iscsi_target_node_set_redirect", 00:06:47.465 "iscsi_target_node_set_auth", 00:06:47.465 "iscsi_target_node_add_lun", 00:06:47.465 "iscsi_get_stats", 00:06:47.465 "iscsi_get_connections", 00:06:47.465 "iscsi_portal_group_set_auth", 00:06:47.465 "iscsi_start_portal_group", 00:06:47.465 "iscsi_delete_portal_group", 00:06:47.465 "iscsi_create_portal_group", 00:06:47.465 "iscsi_get_portal_groups", 00:06:47.465 "iscsi_delete_target_node", 00:06:47.465 "iscsi_target_node_remove_pg_ig_maps", 00:06:47.465 "iscsi_target_node_add_pg_ig_maps", 00:06:47.465 "iscsi_create_target_node", 00:06:47.465 "iscsi_get_target_nodes", 00:06:47.465 "iscsi_delete_initiator_group", 00:06:47.465 "iscsi_initiator_group_remove_initiators", 00:06:47.465 "iscsi_initiator_group_add_initiators", 00:06:47.465 "iscsi_create_initiator_group", 00:06:47.465 "iscsi_get_initiator_groups", 00:06:47.465 "nvmf_set_crdt", 00:06:47.465 "nvmf_set_config", 00:06:47.465 "nvmf_set_max_subsystems", 00:06:47.465 "nvmf_stop_mdns_prr", 00:06:47.465 "nvmf_publish_mdns_prr", 00:06:47.465 "nvmf_subsystem_get_listeners", 00:06:47.465 "nvmf_subsystem_get_qpairs", 00:06:47.465 
"nvmf_subsystem_get_controllers", 00:06:47.465 "nvmf_get_stats", 00:06:47.465 "nvmf_get_transports", 00:06:47.465 "nvmf_create_transport", 00:06:47.465 "nvmf_get_targets", 00:06:47.465 "nvmf_delete_target", 00:06:47.465 "nvmf_create_target", 00:06:47.465 "nvmf_subsystem_allow_any_host", 00:06:47.465 "nvmf_subsystem_set_keys", 00:06:47.465 "nvmf_subsystem_remove_host", 00:06:47.465 "nvmf_subsystem_add_host", 00:06:47.465 "nvmf_ns_remove_host", 00:06:47.465 "nvmf_ns_add_host", 00:06:47.465 "nvmf_subsystem_remove_ns", 00:06:47.465 "nvmf_subsystem_set_ns_ana_group", 00:06:47.465 "nvmf_subsystem_add_ns", 00:06:47.465 "nvmf_subsystem_listener_set_ana_state", 00:06:47.465 "nvmf_discovery_get_referrals", 00:06:47.465 "nvmf_discovery_remove_referral", 00:06:47.465 "nvmf_discovery_add_referral", 00:06:47.465 "nvmf_subsystem_remove_listener", 00:06:47.465 "nvmf_subsystem_add_listener", 00:06:47.465 "nvmf_delete_subsystem", 00:06:47.465 "nvmf_create_subsystem", 00:06:47.465 "nvmf_get_subsystems", 00:06:47.465 "env_dpdk_get_mem_stats", 00:06:47.465 "nbd_get_disks", 00:06:47.465 "nbd_stop_disk", 00:06:47.465 "nbd_start_disk", 00:06:47.465 "ublk_recover_disk", 00:06:47.465 "ublk_get_disks", 00:06:47.465 "ublk_stop_disk", 00:06:47.465 "ublk_start_disk", 00:06:47.465 "ublk_destroy_target", 00:06:47.465 "ublk_create_target", 00:06:47.465 "virtio_blk_create_transport", 00:06:47.465 "virtio_blk_get_transports", 00:06:47.465 "vhost_controller_set_coalescing", 00:06:47.465 "vhost_get_controllers", 00:06:47.465 "vhost_delete_controller", 00:06:47.465 "vhost_create_blk_controller", 00:06:47.465 "vhost_scsi_controller_remove_target", 00:06:47.465 "vhost_scsi_controller_add_target", 00:06:47.465 "vhost_start_scsi_controller", 00:06:47.465 "vhost_create_scsi_controller", 00:06:47.465 "thread_set_cpumask", 00:06:47.465 "scheduler_set_options", 00:06:47.465 "framework_get_governor", 00:06:47.465 "framework_get_scheduler", 00:06:47.465 "framework_set_scheduler", 00:06:47.465 "framework_get_reactors", 00:06:47.465 "thread_get_io_channels", 00:06:47.465 "thread_get_pollers", 00:06:47.465 "thread_get_stats", 00:06:47.465 "framework_monitor_context_switch", 00:06:47.465 "spdk_kill_instance", 00:06:47.465 "log_enable_timestamps", 00:06:47.465 "log_get_flags", 00:06:47.465 "log_clear_flag", 00:06:47.465 "log_set_flag", 00:06:47.465 "log_get_level", 00:06:47.465 "log_set_level", 00:06:47.465 "log_get_print_level", 00:06:47.465 "log_set_print_level", 00:06:47.465 "framework_enable_cpumask_locks", 00:06:47.465 "framework_disable_cpumask_locks", 00:06:47.465 "framework_wait_init", 00:06:47.465 "framework_start_init", 00:06:47.465 "scsi_get_devices", 00:06:47.465 "bdev_get_histogram", 00:06:47.465 "bdev_enable_histogram", 00:06:47.465 "bdev_set_qos_limit", 00:06:47.465 "bdev_set_qd_sampling_period", 00:06:47.465 "bdev_get_bdevs", 00:06:47.465 "bdev_reset_iostat", 00:06:47.465 "bdev_get_iostat", 00:06:47.465 "bdev_examine", 00:06:47.465 "bdev_wait_for_examine", 00:06:47.465 "bdev_set_options", 00:06:47.465 "accel_get_stats", 00:06:47.465 "accel_set_options", 00:06:47.465 "accel_set_driver", 00:06:47.465 "accel_crypto_key_destroy", 00:06:47.465 "accel_crypto_keys_get", 00:06:47.465 "accel_crypto_key_create", 00:06:47.465 "accel_assign_opc", 00:06:47.465 "accel_get_module_info", 00:06:47.465 "accel_get_opc_assignments", 00:06:47.465 "vmd_rescan", 00:06:47.465 "vmd_remove_device", 00:06:47.465 "vmd_enable", 00:06:47.465 "sock_get_default_impl", 00:06:47.465 "sock_set_default_impl", 00:06:47.465 "sock_impl_set_options", 00:06:47.465 
"sock_impl_get_options", 00:06:47.465 "iobuf_get_stats", 00:06:47.465 "iobuf_set_options", 00:06:47.465 "keyring_get_keys", 00:06:47.465 "framework_get_pci_devices", 00:06:47.465 "framework_get_config", 00:06:47.465 "framework_get_subsystems", 00:06:47.465 "fsdev_set_opts", 00:06:47.465 "fsdev_get_opts", 00:06:47.465 "trace_get_info", 00:06:47.466 "trace_get_tpoint_group_mask", 00:06:47.466 "trace_disable_tpoint_group", 00:06:47.466 "trace_enable_tpoint_group", 00:06:47.466 "trace_clear_tpoint_mask", 00:06:47.466 "trace_set_tpoint_mask", 00:06:47.466 "notify_get_notifications", 00:06:47.466 "notify_get_types", 00:06:47.466 "spdk_get_version", 00:06:47.466 "rpc_get_methods" 00:06:47.466 ] 00:06:47.466 08:40:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.466 08:40:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:47.466 08:40:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57900 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57900 ']' 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57900 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57900 00:06:47.466 killing process with pid 57900 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57900' 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57900 00:06:47.466 08:40:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57900 00:06:48.033 ************************************ 00:06:48.033 END TEST spdkcli_tcp 00:06:48.033 ************************************ 00:06:48.033 00:06:48.033 real 0m1.819s 00:06:48.033 user 0m3.120s 00:06:48.033 sys 0m0.579s 00:06:48.033 08:40:18 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.033 08:40:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.033 08:40:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:48.033 08:40:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.033 08:40:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.033 08:40:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.033 ************************************ 00:06:48.034 START TEST dpdk_mem_utility 00:06:48.034 ************************************ 00:06:48.034 08:40:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:48.293 * Looking for test storage... 
00:06:48.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:48.293 08:40:18 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.293 08:40:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.293 08:40:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.293 08:40:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.293 --rc genhtml_branch_coverage=1 00:06:48.293 --rc genhtml_function_coverage=1 00:06:48.293 --rc genhtml_legend=1 00:06:48.293 --rc geninfo_all_blocks=1 00:06:48.293 --rc geninfo_unexecuted_blocks=1 00:06:48.293 00:06:48.293 ' 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.293 --rc 
genhtml_branch_coverage=1 00:06:48.293 --rc genhtml_function_coverage=1 00:06:48.293 --rc genhtml_legend=1 00:06:48.293 --rc geninfo_all_blocks=1 00:06:48.293 --rc geninfo_unexecuted_blocks=1 00:06:48.293 00:06:48.293 ' 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.293 --rc genhtml_branch_coverage=1 00:06:48.293 --rc genhtml_function_coverage=1 00:06:48.293 --rc genhtml_legend=1 00:06:48.293 --rc geninfo_all_blocks=1 00:06:48.293 --rc geninfo_unexecuted_blocks=1 00:06:48.293 00:06:48.293 ' 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.293 --rc genhtml_branch_coverage=1 00:06:48.293 --rc genhtml_function_coverage=1 00:06:48.293 --rc genhtml_legend=1 00:06:48.293 --rc geninfo_all_blocks=1 00:06:48.293 --rc geninfo_unexecuted_blocks=1 00:06:48.293 00:06:48.293 ' 00:06:48.293 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:48.293 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57991 00:06:48.293 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.293 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57991 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57991 ']' 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.293 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:48.293 [2024-11-20 08:40:19.172319] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:48.293 [2024-11-20 08:40:19.173229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57991 ] 00:06:48.551 [2024-11-20 08:40:19.324090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.551 [2024-11-20 08:40:19.400490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.810 [2024-11-20 08:40:19.500998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.070 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.070 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:49.070 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:49.070 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:49.070 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.070 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.070 { 00:06:49.070 "filename": "/tmp/spdk_mem_dump.txt" 00:06:49.070 } 00:06:49.070 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.070 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:49.070 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:49.070 1 heaps totaling size 810.000000 MiB 00:06:49.070 size: 810.000000 MiB heap id: 0 00:06:49.070 end heaps---------- 00:06:49.070 9 mempools totaling size 595.772034 MiB 00:06:49.070 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:49.070 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:49.070 size: 92.545471 MiB name: bdev_io_57991 00:06:49.070 size: 50.003479 MiB name: msgpool_57991 00:06:49.070 size: 36.509338 MiB name: fsdev_io_57991 00:06:49.070 size: 21.763794 MiB name: PDU_Pool 00:06:49.070 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:49.070 size: 4.133484 MiB name: evtpool_57991 00:06:49.070 size: 0.026123 MiB name: Session_Pool 00:06:49.070 end mempools------- 00:06:49.070 6 memzones totaling size 4.142822 MiB 00:06:49.070 size: 1.000366 MiB name: RG_ring_0_57991 00:06:49.070 size: 1.000366 MiB name: RG_ring_1_57991 00:06:49.070 size: 1.000366 MiB name: RG_ring_4_57991 00:06:49.070 size: 1.000366 MiB name: RG_ring_5_57991 00:06:49.070 size: 0.125366 MiB name: RG_ring_2_57991 00:06:49.070 size: 0.015991 MiB name: RG_ring_3_57991 00:06:49.070 end memzones------- 00:06:49.070 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:49.070 heap id: 0 total size: 810.000000 MiB number of busy elements: 310 number of free elements: 15 00:06:49.070 list of free elements. 
size: 10.813782 MiB 00:06:49.070 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:49.070 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:49.070 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:49.070 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:49.070 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:49.070 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:49.070 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:49.070 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:49.070 element at address: 0x20001a600000 with size: 0.568237 MiB 00:06:49.070 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:49.070 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:49.070 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:49.070 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:49.070 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:49.070 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:49.070 list of standard malloc elements. size: 199.267334 MiB 00:06:49.070 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:49.070 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:49.070 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:49.070 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:49.070 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:49.070 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:49.070 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:49.070 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:49.070 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:49.070 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:49.070 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:49.070 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:49.070 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:49.070 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:49.071 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:49.071 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:49.071 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:49.071 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a6931c0 with size: 0.000183 MiB 
00:06:49.071 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:49.071 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:49.072 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:49.072 element at 
address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e880 
with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:49.072 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:49.072 list of memzone associated elements. 
size: 599.918884 MiB 00:06:49.072 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:49.072 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:49.072 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:49.072 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:49.072 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:49.072 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57991_0 00:06:49.072 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:49.072 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57991_0 00:06:49.072 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:49.072 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57991_0 00:06:49.072 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:49.072 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:49.072 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:49.072 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:49.072 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:49.072 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57991_0 00:06:49.072 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:49.072 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57991 00:06:49.072 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:49.072 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57991 00:06:49.072 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:49.072 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:49.072 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:49.072 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:49.072 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:49.072 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:49.073 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:49.073 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:49.073 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:49.073 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57991 00:06:49.073 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:49.073 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57991 00:06:49.073 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:49.073 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57991 00:06:49.073 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:49.073 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57991 00:06:49.073 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:49.073 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57991 00:06:49.073 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:49.073 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57991 00:06:49.073 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:49.073 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:49.073 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:49.073 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:49.073 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:49.073 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:49.073 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:49.073 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57991 00:06:49.073 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:49.073 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57991 00:06:49.073 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:49.073 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:49.073 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:49.073 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:49.073 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:49.073 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57991 00:06:49.073 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:49.073 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:49.073 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:49.073 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57991 00:06:49.073 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:49.073 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57991 00:06:49.073 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:49.073 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57991 00:06:49.073 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:49.073 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:49.073 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:49.073 08:40:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57991 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57991 ']' 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57991 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57991 00:06:49.073 killing process with pid 57991 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57991' 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57991 00:06:49.073 08:40:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57991 00:06:49.640 00:06:49.640 real 0m1.642s 00:06:49.640 user 0m1.522s 00:06:49.640 sys 0m0.534s 00:06:49.640 08:40:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.640 08:40:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.640 ************************************ 00:06:49.640 END TEST dpdk_mem_utility 00:06:49.640 ************************************ 00:06:49.900 08:40:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:49.900 08:40:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.900 08:40:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.900 08:40:20 -- common/autotest_common.sh@10 -- # set +x 
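The dpdk_mem_utility run above reduces to a short command sequence: start spdk_tgt, ask it over RPC for a DPDK memory statistics dump, then post-process the dump with scripts/dpdk_mem_info.py (first the heap/mempool/memzone summary, then the per-element listing for heap 0). A minimal manual sketch of that flow, assuming the repository layout visible in this log, scripts/rpc.py in place of the harness's rpc_cmd wrapper, and a plain socket poll in place of its waitforlisten helper:

#!/usr/bin/env bash
# Sketch only: replays the dpdk_mem_utility steps seen in this log by hand,
# run from the SPDK repository root.
set -e
./build/bin/spdk_tgt &                      # start the SPDK target
tgt_pid=$!
# crude stand-in for waitforlisten(): wait until the default RPC socket exists
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
# request a DPDK memory stats dump; the reply names the file that was written,
# e.g. {"filename": "/tmp/spdk_mem_dump.txt"} as in the output above
./scripts/rpc.py env_dpdk_get_mem_stats
# summarize heaps/mempools/memzones, then list the elements of heap 0
./scripts/dpdk_mem_info.py
./scripts/dpdk_mem_info.py -m 0
kill "$tgt_pid"

The test itself tears the target down with its killprocess helper instead of a bare kill, the same way the spdkcli_tcp test did for pid 57900 earlier in the log.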
00:06:49.900 ************************************ 00:06:49.900 START TEST event 00:06:49.900 ************************************ 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:49.900 * Looking for test storage... 00:06:49.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.900 08:40:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.900 08:40:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.900 08:40:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.900 08:40:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.900 08:40:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.900 08:40:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.900 08:40:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.900 08:40:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.900 08:40:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.900 08:40:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.900 08:40:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.900 08:40:20 event -- scripts/common.sh@344 -- # case "$op" in 00:06:49.900 08:40:20 event -- scripts/common.sh@345 -- # : 1 00:06:49.900 08:40:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.900 08:40:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.900 08:40:20 event -- scripts/common.sh@365 -- # decimal 1 00:06:49.900 08:40:20 event -- scripts/common.sh@353 -- # local d=1 00:06:49.900 08:40:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.900 08:40:20 event -- scripts/common.sh@355 -- # echo 1 00:06:49.900 08:40:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.900 08:40:20 event -- scripts/common.sh@366 -- # decimal 2 00:06:49.900 08:40:20 event -- scripts/common.sh@353 -- # local d=2 00:06:49.900 08:40:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.900 08:40:20 event -- scripts/common.sh@355 -- # echo 2 00:06:49.900 08:40:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.900 08:40:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.900 08:40:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.900 08:40:20 event -- scripts/common.sh@368 -- # return 0 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.900 --rc genhtml_branch_coverage=1 00:06:49.900 --rc genhtml_function_coverage=1 00:06:49.900 --rc genhtml_legend=1 00:06:49.900 --rc geninfo_all_blocks=1 00:06:49.900 --rc geninfo_unexecuted_blocks=1 00:06:49.900 00:06:49.900 ' 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.900 --rc genhtml_branch_coverage=1 00:06:49.900 --rc genhtml_function_coverage=1 00:06:49.900 --rc genhtml_legend=1 00:06:49.900 --rc 
geninfo_all_blocks=1 00:06:49.900 --rc geninfo_unexecuted_blocks=1 00:06:49.900 00:06:49.900 ' 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.900 --rc genhtml_branch_coverage=1 00:06:49.900 --rc genhtml_function_coverage=1 00:06:49.900 --rc genhtml_legend=1 00:06:49.900 --rc geninfo_all_blocks=1 00:06:49.900 --rc geninfo_unexecuted_blocks=1 00:06:49.900 00:06:49.900 ' 00:06:49.900 08:40:20 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.900 --rc genhtml_branch_coverage=1 00:06:49.900 --rc genhtml_function_coverage=1 00:06:49.900 --rc genhtml_legend=1 00:06:49.900 --rc geninfo_all_blocks=1 00:06:49.900 --rc geninfo_unexecuted_blocks=1 00:06:49.900 00:06:49.900 ' 00:06:49.900 08:40:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:49.900 08:40:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:49.900 08:40:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:49.901 08:40:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:49.901 08:40:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.901 08:40:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.901 ************************************ 00:06:49.901 START TEST event_perf 00:06:49.901 ************************************ 00:06:49.901 08:40:20 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:50.160 Running I/O for 1 seconds...[2024-11-20 08:40:20.828407] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:50.160 [2024-11-20 08:40:20.828616] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58069 ] 00:06:50.160 [2024-11-20 08:40:20.981746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.418 [2024-11-20 08:40:21.078091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.418 [2024-11-20 08:40:21.078251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.418 Running I/O for 1 seconds...[2024-11-20 08:40:21.078359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.418 [2024-11-20 08:40:21.078363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.353 00:06:51.353 lcore 0: 196364 00:06:51.353 lcore 1: 196368 00:06:51.353 lcore 2: 196372 00:06:51.353 lcore 3: 196376 00:06:51.353 done. 
00:06:51.353 00:06:51.353 real 0m1.346s 00:06:51.353 user 0m4.152s 00:06:51.353 sys 0m0.066s 00:06:51.353 08:40:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.353 08:40:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.353 ************************************ 00:06:51.353 END TEST event_perf 00:06:51.353 ************************************ 00:06:51.353 08:40:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:51.353 08:40:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:51.354 08:40:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.354 08:40:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.354 ************************************ 00:06:51.354 START TEST event_reactor 00:06:51.354 ************************************ 00:06:51.354 08:40:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:51.354 [2024-11-20 08:40:22.230961] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:51.354 [2024-11-20 08:40:22.231050] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58113 ] 00:06:51.612 [2024-11-20 08:40:22.374412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.612 [2024-11-20 08:40:22.458127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.009 test_start 00:06:53.009 oneshot 00:06:53.009 tick 100 00:06:53.009 tick 100 00:06:53.009 tick 250 00:06:53.009 tick 100 00:06:53.009 tick 100 00:06:53.009 tick 100 00:06:53.009 tick 250 00:06:53.009 tick 500 00:06:53.009 tick 100 00:06:53.009 tick 100 00:06:53.009 tick 250 00:06:53.009 tick 100 00:06:53.009 tick 100 00:06:53.009 test_end 00:06:53.009 00:06:53.009 real 0m1.317s 00:06:53.009 user 0m1.163s 00:06:53.009 sys 0m0.047s 00:06:53.009 08:40:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.009 ************************************ 00:06:53.009 END TEST event_reactor 00:06:53.009 08:40:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:53.009 ************************************ 00:06:53.009 08:40:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.009 08:40:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:53.009 08:40:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.009 08:40:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.009 ************************************ 00:06:53.009 START TEST event_reactor_perf 00:06:53.009 ************************************ 00:06:53.009 08:40:23 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.009 [2024-11-20 08:40:23.601528] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:53.009 [2024-11-20 08:40:23.601687] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58143 ] 00:06:53.009 [2024-11-20 08:40:23.755833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.009 [2024-11-20 08:40:23.837771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.386 test_start 00:06:54.386 test_end 00:06:54.386 Performance: 349861 events per second 00:06:54.386 00:06:54.386 real 0m1.329s 00:06:54.386 user 0m1.167s 00:06:54.386 sys 0m0.055s 00:06:54.386 ************************************ 00:06:54.386 END TEST event_reactor_perf 00:06:54.386 ************************************ 00:06:54.386 08:40:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.386 08:40:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.386 08:40:24 event -- event/event.sh@49 -- # uname -s 00:06:54.386 08:40:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:54.386 08:40:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:54.386 08:40:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.386 08:40:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.386 08:40:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.386 ************************************ 00:06:54.386 START TEST event_scheduler 00:06:54.386 ************************************ 00:06:54.386 08:40:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:54.386 * Looking for test storage... 
00:06:54.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:54.386 08:40:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.386 08:40:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.386 08:40:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.386 08:40:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.386 08:40:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.386 08:40:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.386 08:40:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.387 08:40:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.387 --rc genhtml_branch_coverage=1 00:06:54.387 --rc genhtml_function_coverage=1 00:06:54.387 --rc genhtml_legend=1 00:06:54.387 --rc geninfo_all_blocks=1 00:06:54.387 --rc geninfo_unexecuted_blocks=1 00:06:54.387 00:06:54.387 ' 00:06:54.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.387 --rc genhtml_branch_coverage=1 00:06:54.387 --rc genhtml_function_coverage=1 00:06:54.387 --rc genhtml_legend=1 00:06:54.387 --rc geninfo_all_blocks=1 00:06:54.387 --rc geninfo_unexecuted_blocks=1 00:06:54.387 00:06:54.387 ' 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.387 --rc genhtml_branch_coverage=1 00:06:54.387 --rc genhtml_function_coverage=1 00:06:54.387 --rc genhtml_legend=1 00:06:54.387 --rc geninfo_all_blocks=1 00:06:54.387 --rc geninfo_unexecuted_blocks=1 00:06:54.387 00:06:54.387 ' 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.387 --rc genhtml_branch_coverage=1 00:06:54.387 --rc genhtml_function_coverage=1 00:06:54.387 --rc genhtml_legend=1 00:06:54.387 --rc geninfo_all_blocks=1 00:06:54.387 --rc geninfo_unexecuted_blocks=1 00:06:54.387 00:06:54.387 ' 00:06:54.387 08:40:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:54.387 08:40:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58212 00:06:54.387 08:40:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.387 08:40:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58212 00:06:54.387 08:40:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58212 ']' 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.387 08:40:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.387 [2024-11-20 08:40:25.233337] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:54.387 [2024-11-20 08:40:25.233715] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58212 ] 00:06:54.645 [2024-11-20 08:40:25.387511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.645 [2024-11-20 08:40:25.482310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.645 [2024-11-20 08:40:25.482467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.645 [2024-11-20 08:40:25.482581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.645 [2024-11-20 08:40:25.482588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.645 08:40:25 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.645 08:40:25 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:54.645 08:40:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:54.645 08:40:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.645 08:40:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.645 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.645 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.645 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.645 POWER: Cannot set governor of lcore 0 to performance 00:06:54.645 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.645 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.645 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.645 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.645 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:54.645 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:54.645 POWER: Unable to set Power Management Environment for lcore 0 00:06:54.645 [2024-11-20 08:40:25.522092] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:54.645 [2024-11-20 08:40:25.522219] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:54.645 [2024-11-20 08:40:25.522272] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:54.645 [2024-11-20 08:40:25.522441] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:54.645 [2024-11-20 08:40:25.522463] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:54.645 [2024-11-20 08:40:25.522473] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:54.645 08:40:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.645 08:40:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:54.645 08:40:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.645 08:40:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.903 [2024-11-20 08:40:25.605784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.903 [2024-11-20 08:40:25.657830] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:54.903 08:40:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.903 08:40:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:54.903 08:40:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.903 08:40:25 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.903 08:40:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.903 ************************************ 00:06:54.903 START TEST scheduler_create_thread 00:06:54.903 ************************************ 00:06:54.903 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 2 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 3 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 4 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 5 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 6 00:06:54.904 
08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 7 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 8 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 9 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 10 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.904 08:40:25 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.904 08:40:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.471 08:40:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.471 08:40:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:55.471 08:40:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:55.471 08:40:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.471 08:40:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.849 ************************************ 00:06:56.849 END TEST scheduler_create_thread 00:06:56.849 ************************************ 00:06:56.849 08:40:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.849 00:06:56.849 real 0m1.755s 00:06:56.849 user 0m0.015s 00:06:56.849 sys 0m0.007s 00:06:56.849 08:40:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.849 08:40:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.849 08:40:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:56.849 08:40:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58212 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58212 ']' 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58212 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58212 00:06:56.849 killing process with pid 58212 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58212' 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58212 00:06:56.849 08:40:27 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58212 00:06:57.163 [2024-11-20 08:40:27.901277] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
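The RPC sequence just exercised can be reproduced by hand against a running scheduler app; a rough sketch, assuming scripts/rpc.py can import the scheduler_plugin module from test/event/scheduler (the PYTHONPATH export and capturing the printed thread id via command substitution are assumptions, mirroring the thread_id=11/12 assignments above):
export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }
rpc framework_set_scheduler dynamic                        # switch to the dynamic scheduler before init completes
rpc framework_start_init                                   # finish subsystem initialization
rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100 # busy thread pinned to core 0
tid=$(rpc scheduler_thread_create -n half_active -a 0)     # unpinned thread, starts idle; prints the new thread id
rpc scheduler_thread_set_active "$tid" 50                  # raise its reported activity to 50%
victim=$(rpc scheduler_thread_create -n deleted -a 100)    # a busy thread created only to be removed again
rpc scheduler_thread_delete "$victim"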
00:06:57.422 00:06:57.422 real 0m3.200s 00:06:57.422 user 0m3.922s 00:06:57.422 sys 0m0.413s 00:06:57.422 08:40:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.422 ************************************ 00:06:57.422 END TEST event_scheduler 00:06:57.422 ************************************ 00:06:57.422 08:40:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.422 08:40:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:57.422 08:40:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:57.422 08:40:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.422 08:40:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.422 08:40:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.422 ************************************ 00:06:57.422 START TEST app_repeat 00:06:57.422 ************************************ 00:06:57.422 08:40:28 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:57.422 Process app_repeat pid: 58288 00:06:57.422 spdk_app_start Round 0 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58288 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58288' 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:57.422 08:40:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58288 /var/tmp/spdk-nbd.sock 00:06:57.422 08:40:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58288 ']' 00:06:57.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.422 08:40:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.422 08:40:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.422 08:40:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.423 08:40:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.423 08:40:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.423 [2024-11-20 08:40:28.257704] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:57.423 [2024-11-20 08:40:28.257883] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58288 ] 00:06:57.682 [2024-11-20 08:40:28.403164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.682 [2024-11-20 08:40:28.489178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.682 [2024-11-20 08:40:28.489185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.682 [2024-11-20 08:40:28.564897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.941 08:40:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.941 08:40:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:57.941 08:40:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.200 Malloc0 00:06:58.200 08:40:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.457 Malloc1 00:06:58.457 08:40:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.457 08:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.458 08:40:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.716 /dev/nbd0 00:06:58.716 08:40:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.716 08:40:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.716 08:40:29 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.716 1+0 records in 00:06:58.716 1+0 records out 00:06:58.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253609 s, 16.2 MB/s 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.716 08:40:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.716 08:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.716 08:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.716 08:40:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.975 /dev/nbd1 00:06:58.975 08:40:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.975 08:40:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.975 1+0 records in 00:06:58.975 1+0 records out 00:06:58.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329891 s, 12.4 MB/s 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.975 08:40:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.975 08:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.975 08:40:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.975 08:40:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:58.975 08:40:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.975 08:40:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.541 { 00:06:59.541 "nbd_device": "/dev/nbd0", 00:06:59.541 "bdev_name": "Malloc0" 00:06:59.541 }, 00:06:59.541 { 00:06:59.541 "nbd_device": "/dev/nbd1", 00:06:59.541 "bdev_name": "Malloc1" 00:06:59.541 } 00:06:59.541 ]' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.541 { 00:06:59.541 "nbd_device": "/dev/nbd0", 00:06:59.541 "bdev_name": "Malloc0" 00:06:59.541 }, 00:06:59.541 { 00:06:59.541 "nbd_device": "/dev/nbd1", 00:06:59.541 "bdev_name": "Malloc1" 00:06:59.541 } 00:06:59.541 ]' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.541 /dev/nbd1' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.541 /dev/nbd1' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.541 256+0 records in 00:06:59.541 256+0 records out 00:06:59.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695469 s, 151 MB/s 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.541 256+0 records in 00:06:59.541 256+0 records out 00:06:59.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328217 s, 31.9 MB/s 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.541 256+0 records in 00:06:59.541 256+0 records out 00:06:59.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323014 s, 32.5 MB/s 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.541 08:40:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.542 08:40:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.542 08:40:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.799 08:40:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.365 08:40:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.365 08:40:31 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.365 08:40:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.623 08:40:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.623 08:40:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.623 08:40:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.623 08:40:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.623 08:40:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.624 08:40:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.624 08:40:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.624 08:40:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.624 08:40:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.624 08:40:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.624 08:40:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.624 08:40:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.624 08:40:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.882 08:40:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.140 [2024-11-20 08:40:31.939528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.140 [2024-11-20 08:40:32.018375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.140 [2024-11-20 08:40:32.018392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.399 [2024-11-20 08:40:32.098284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.399 [2024-11-20 08:40:32.098407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.399 [2024-11-20 08:40:32.098425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.987 08:40:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.987 08:40:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:03.987 spdk_app_start Round 1 00:07:03.987 08:40:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58288 /var/tmp/spdk-nbd.sock 00:07:03.987 08:40:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58288 ']' 00:07:03.987 08:40:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.987 08:40:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.987 08:40:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
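Before each teardown the harness sanity-checks the exported devices by parsing nbd_get_disks, as in the round just completed; a condensed sketch of that check, assuming jq is available and the app is still listening on /var/tmp/spdk-nbd.sock (variable names are illustrative):
nbd_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
names=$(nbd_rpc nbd_get_disks | jq -r '.[] | .nbd_device')   # "/dev/nbd0" and "/dev/nbd1" while both are exported
count=$(grep -c /dev/nbd <<< "$names" || true)               # 2 before nbd_stop_disk, 0 once the list is back to []
[ "$count" -eq 2 ] || exit 1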
00:07:03.987 08:40:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.987 08:40:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.245 08:40:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.245 08:40:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.245 08:40:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.504 Malloc0 00:07:04.504 08:40:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.762 Malloc1 00:07:05.020 08:40:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.020 08:40:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:05.278 /dev/nbd0 00:07:05.278 08:40:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.278 08:40:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.278 1+0 records in 00:07:05.278 1+0 records out 
00:07:05.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326159 s, 12.6 MB/s 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.278 08:40:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.278 08:40:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.278 08:40:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.278 08:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.278 08:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.278 08:40:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.538 /dev/nbd1 00:07:05.538 08:40:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.538 08:40:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.538 1+0 records in 00:07:05.538 1+0 records out 00:07:05.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341339 s, 12.0 MB/s 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.538 08:40:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.538 08:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.538 08:40:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.538 08:40:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.538 08:40:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.538 08:40:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.800 08:40:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.800 { 00:07:05.800 "nbd_device": "/dev/nbd0", 00:07:05.800 "bdev_name": "Malloc0" 00:07:05.800 }, 00:07:05.800 { 00:07:05.800 "nbd_device": "/dev/nbd1", 00:07:05.800 "bdev_name": "Malloc1" 00:07:05.800 } 
00:07:05.800 ]' 00:07:05.800 08:40:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.800 08:40:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.800 { 00:07:05.800 "nbd_device": "/dev/nbd0", 00:07:05.800 "bdev_name": "Malloc0" 00:07:05.800 }, 00:07:05.800 { 00:07:05.800 "nbd_device": "/dev/nbd1", 00:07:05.800 "bdev_name": "Malloc1" 00:07:05.800 } 00:07:05.800 ]' 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.058 /dev/nbd1' 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.058 /dev/nbd1' 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:06.058 256+0 records in 00:07:06.058 256+0 records out 00:07:06.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00708698 s, 148 MB/s 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.058 256+0 records in 00:07:06.058 256+0 records out 00:07:06.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279431 s, 37.5 MB/s 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.058 256+0 records in 00:07:06.058 256+0 records out 00:07:06.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347141 s, 30.2 MB/s 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.058 08:40:36 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.058 08:40:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.317 08:40:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.575 08:40:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.834 08:40:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.092 08:40:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.092 08:40:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.351 08:40:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.609 [2024-11-20 08:40:38.365563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.609 [2024-11-20 08:40:38.437234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.609 [2024-11-20 08:40:38.437247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.609 [2024-11-20 08:40:38.510787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.609 [2024-11-20 08:40:38.510908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.609 [2024-11-20 08:40:38.510924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.894 08:40:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.894 08:40:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:10.894 spdk_app_start Round 2 00:07:10.894 08:40:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58288 /var/tmp/spdk-nbd.sock 00:07:10.894 08:40:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58288 ']' 00:07:10.894 08:40:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.894 08:40:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.894 08:40:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:10.894 08:40:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.894 08:40:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.894 08:40:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.894 08:40:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:10.894 08:40:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.894 Malloc0 00:07:10.894 08:40:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.152 Malloc1 00:07:11.152 08:40:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.152 08:40:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.152 08:40:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.152 08:40:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.152 08:40:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.152 08:40:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.152 08:40:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.152 08:40:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.153 08:40:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.153 08:40:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.153 08:40:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.153 08:40:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.153 08:40:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.153 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.153 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.153 08:40:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.412 /dev/nbd0 00:07:11.412 08:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.412 08:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.412 1+0 records in 00:07:11.412 1+0 records out 
00:07:11.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336796 s, 12.2 MB/s 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.412 08:40:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:11.412 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.412 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.412 08:40:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.980 /dev/nbd1 00:07:11.980 08:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.980 08:40:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.980 1+0 records in 00:07:11.980 1+0 records out 00:07:11.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358005 s, 11.4 MB/s 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.980 08:40:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:11.980 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.980 08:40:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.980 08:40:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.980 08:40:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.980 08:40:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.239 { 00:07:12.239 "nbd_device": "/dev/nbd0", 00:07:12.239 "bdev_name": "Malloc0" 00:07:12.239 }, 00:07:12.239 { 00:07:12.239 "nbd_device": "/dev/nbd1", 00:07:12.239 "bdev_name": "Malloc1" 00:07:12.239 } 
00:07:12.239 ]' 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.239 { 00:07:12.239 "nbd_device": "/dev/nbd0", 00:07:12.239 "bdev_name": "Malloc0" 00:07:12.239 }, 00:07:12.239 { 00:07:12.239 "nbd_device": "/dev/nbd1", 00:07:12.239 "bdev_name": "Malloc1" 00:07:12.239 } 00:07:12.239 ]' 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.239 /dev/nbd1' 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.239 /dev/nbd1' 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.239 256+0 records in 00:07:12.239 256+0 records out 00:07:12.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611666 s, 171 MB/s 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.239 256+0 records in 00:07:12.239 256+0 records out 00:07:12.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218581 s, 48.0 MB/s 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.239 08:40:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.498 256+0 records in 00:07:12.498 256+0 records out 00:07:12.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258969 s, 40.5 MB/s 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.498 08:40:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.498 08:40:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.499 08:40:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.499 08:40:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.499 08:40:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.757 08:40:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.015 08:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.016 08:40:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.277 08:40:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.277 08:40:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.846 08:40:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:13.846 [2024-11-20 08:40:44.710745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.105 [2024-11-20 08:40:44.786437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.105 [2024-11-20 08:40:44.786448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.105 [2024-11-20 08:40:44.859112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.105 [2024-11-20 08:40:44.859222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.105 [2024-11-20 08:40:44.859238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.727 08:40:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58288 /var/tmp/spdk-nbd.sock 00:07:16.727 08:40:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58288 ']' 00:07:16.727 08:40:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.727 08:40:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.727 08:40:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
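
The round above walks one full nbd cycle before app_repeat restarts the app: both Malloc bdevs are exported over /var/tmp/spdk-nbd.sock, a random 1 MiB file is written through each /dev/nbdN device and byte-compared back, then the disks are detached until nbd_get_disks returns an empty list. A condensed, hand-written sketch of that sequence for a single device, with every path and RPC name taken from the trace and the retry loops of the real helpers (waitfornbd, nbd_dd_data_verify, waitfornbd_exit) omitted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0          # export the bdev as an nbd block device
    grep -q -w nbd0 /proc/partitions                            # real helper polls until the kernel registers it
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # build a 1 MiB reference file
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct    # write it through the nbd device
    cmp -b -n 1M "$tmp" /dev/nbd0                               # read back and byte-compare
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0                   # detach; nbd_get_disks should then return []
    rm -f "$tmp"
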
00:07:16.727 08:40:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.727 08:40:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:16.987 08:40:47 event.app_repeat -- event/event.sh@39 -- # killprocess 58288 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58288 ']' 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58288 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58288 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.987 killing process with pid 58288 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58288' 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58288 00:07:16.987 08:40:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58288 00:07:17.254 spdk_app_start is called in Round 0. 00:07:17.254 Shutdown signal received, stop current app iteration 00:07:17.254 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:07:17.254 spdk_app_start is called in Round 1. 00:07:17.254 Shutdown signal received, stop current app iteration 00:07:17.254 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:07:17.254 spdk_app_start is called in Round 2. 00:07:17.254 Shutdown signal received, stop current app iteration 00:07:17.254 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:07:17.254 spdk_app_start is called in Round 3. 00:07:17.254 Shutdown signal received, stop current app iteration 00:07:17.254 08:40:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:17.254 08:40:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:17.254 00:07:17.254 real 0m19.861s 00:07:17.254 user 0m45.102s 00:07:17.254 sys 0m3.308s 00:07:17.254 08:40:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.254 08:40:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.254 ************************************ 00:07:17.254 END TEST app_repeat 00:07:17.254 ************************************ 00:07:17.254 08:40:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:17.254 08:40:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:17.254 08:40:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.254 08:40:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.254 08:40:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.254 ************************************ 00:07:17.254 START TEST cpu_locks 00:07:17.254 ************************************ 00:07:17.254 08:40:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:17.515 * Looking for test storage... 
00:07:17.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:17.515 08:40:48 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.515 08:40:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.515 08:40:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.515 08:40:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.515 08:40:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:17.515 08:40:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.515 08:40:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.516 --rc genhtml_branch_coverage=1 00:07:17.516 --rc genhtml_function_coverage=1 00:07:17.516 --rc genhtml_legend=1 00:07:17.516 --rc geninfo_all_blocks=1 00:07:17.516 --rc geninfo_unexecuted_blocks=1 00:07:17.516 00:07:17.516 ' 00:07:17.516 08:40:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.516 --rc genhtml_branch_coverage=1 00:07:17.516 --rc genhtml_function_coverage=1 
00:07:17.516 --rc genhtml_legend=1 00:07:17.516 --rc geninfo_all_blocks=1 00:07:17.516 --rc geninfo_unexecuted_blocks=1 00:07:17.516 00:07:17.516 ' 00:07:17.516 08:40:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.516 --rc genhtml_branch_coverage=1 00:07:17.516 --rc genhtml_function_coverage=1 00:07:17.516 --rc genhtml_legend=1 00:07:17.516 --rc geninfo_all_blocks=1 00:07:17.516 --rc geninfo_unexecuted_blocks=1 00:07:17.516 00:07:17.516 ' 00:07:17.516 08:40:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.516 --rc genhtml_branch_coverage=1 00:07:17.516 --rc genhtml_function_coverage=1 00:07:17.516 --rc genhtml_legend=1 00:07:17.516 --rc geninfo_all_blocks=1 00:07:17.516 --rc geninfo_unexecuted_blocks=1 00:07:17.516 00:07:17.516 ' 00:07:17.516 08:40:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:17.516 08:40:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:17.516 08:40:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:17.516 08:40:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:17.516 08:40:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.516 08:40:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.516 08:40:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.516 ************************************ 00:07:17.516 START TEST default_locks 00:07:17.516 ************************************ 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58743 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58743 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58743 ']' 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.516 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.516 [2024-11-20 08:40:48.396319] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:17.516 [2024-11-20 08:40:48.396439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58743 ] 00:07:17.775 [2024-11-20 08:40:48.545185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.775 [2024-11-20 08:40:48.629997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.034 [2024-11-20 08:40:48.728950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.292 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.292 08:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:18.292 08:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58743 00:07:18.292 08:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58743 00:07:18.292 08:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.550 08:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58743 00:07:18.550 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58743 ']' 00:07:18.550 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58743 00:07:18.550 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:18.550 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.550 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58743 00:07:18.808 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.808 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.808 killing process with pid 58743 00:07:18.808 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58743' 00:07:18.808 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58743 00:07:18.809 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58743 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58743 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58743 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58743 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58743 ']' 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.377 
08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.377 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58743) - No such process 00:07:19.377 ERROR: process (pid: 58743) is no longer running 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:19.377 00:07:19.377 real 0m1.658s 00:07:19.377 user 0m1.583s 00:07:19.377 sys 0m0.615s 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.377 08:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.377 ************************************ 00:07:19.377 END TEST default_locks 00:07:19.377 ************************************ 00:07:19.377 08:40:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:19.377 08:40:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.377 08:40:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.377 08:40:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.377 ************************************ 00:07:19.377 START TEST default_locks_via_rpc 00:07:19.377 ************************************ 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58788 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58788 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58788 ']' 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
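
The default_locks run above boils down to: start a target pinned to core 0, confirm the core-mask lock is held, kill the target, then confirm both the lock and the process are gone (the final waitforlisten on the dead pid is wrapped in NOT and must return 1). A minimal sketch of the positive half, using only the binary path and checks visible in the trace; the pid handling and retry logic of the real helpers are simplified:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                     # target pinned to core 0
    pid=$!
    # ... wait for /var/tmp/spdk.sock to come up ...
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid" && wait "$pid"
    # after the process exits, the same lslocks check (and a fresh waitforlisten) is expected to fail
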
00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.377 08:40:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.377 [2024-11-20 08:40:50.101201] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:19.377 [2024-11-20 08:40:50.101313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58788 ] 00:07:19.377 [2024-11-20 08:40:50.245171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.636 [2024-11-20 08:40:50.324262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.636 [2024-11-20 08:40:50.422429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.635 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.635 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:20.635 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:20.635 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58788 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58788 00:07:20.636 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58788 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58788 ']' 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58788 00:07:20.901 08:40:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58788 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.901 killing process with pid 58788 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58788' 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58788 00:07:20.901 08:40:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58788 00:07:21.467 00:07:21.467 real 0m2.050s 00:07:21.467 user 0m2.173s 00:07:21.467 sys 0m0.626s 00:07:21.467 08:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.467 08:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.467 ************************************ 00:07:21.467 END TEST default_locks_via_rpc 00:07:21.467 ************************************ 00:07:21.467 08:40:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:21.467 08:40:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.467 08:40:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.467 08:40:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.467 ************************************ 00:07:21.467 START TEST non_locking_app_on_locked_coremask 00:07:21.467 ************************************ 00:07:21.467 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:21.467 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58839 00:07:21.468 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58839 /var/tmp/spdk.sock 00:07:21.468 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58839 ']' 00:07:21.468 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.468 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.468 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.468 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
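
default_locks_via_rpc, which finished above, exercises the same lock toggled at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core lock files (the no_locks helper then finds no /var/tmp/spdk_cpu_lock_* entries) and framework_enable_cpumask_locks takes them back. A small sketch with the RPC method names and socket path from the trace; "$pid" stands for the running target's pid:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks    # drop the per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core locks on disk"
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks     # take them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held again"
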
00:07:21.468 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.468 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.468 [2024-11-20 08:40:52.219555] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:21.468 [2024-11-20 08:40:52.219690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58839 ] 00:07:21.468 [2024-11-20 08:40:52.367330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.727 [2024-11-20 08:40:52.446326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.727 [2024-11-20 08:40:52.545074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58847 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58847 /var/tmp/spdk2.sock 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58847 ']' 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.985 08:40:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.985 [2024-11-20 08:40:52.866780] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:21.985 [2024-11-20 08:40:52.866932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58847 ] 00:07:22.245 [2024-11-20 08:40:53.030042] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:22.245 [2024-11-20 08:40:53.030126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.503 [2024-11-20 08:40:53.203641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.503 [2024-11-20 08:40:53.402646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.069 08:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.069 08:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:23.069 08:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58839 00:07:23.069 08:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58839 00:07:23.069 08:40:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58839 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58839 ']' 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58839 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58839 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.004 killing process with pid 58839 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58839' 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58839 00:07:24.004 08:40:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58839 00:07:24.963 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58847 00:07:24.963 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58847 ']' 00:07:24.963 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58847 00:07:24.963 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.963 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.963 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58847 00:07:25.222 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.222 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.222 killing process with pid 58847 00:07:25.222 08:40:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58847' 00:07:25.222 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58847 00:07:25.222 08:40:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58847 00:07:25.789 00:07:25.789 real 0m4.255s 00:07:25.789 user 0m4.532s 00:07:25.789 sys 0m1.238s 00:07:25.789 08:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.789 08:40:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.789 ************************************ 00:07:25.789 END TEST non_locking_app_on_locked_coremask 00:07:25.789 ************************************ 00:07:25.789 08:40:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:25.789 08:40:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.789 08:40:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.789 08:40:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.789 ************************************ 00:07:25.789 START TEST locking_app_on_unlocked_coremask 00:07:25.789 ************************************ 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58925 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58925 /var/tmp/spdk.sock 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58925 ']' 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.789 08:40:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.789 [2024-11-20 08:40:56.522492] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:25.789 [2024-11-20 08:40:56.522614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:07:25.789 [2024-11-20 08:40:56.675734] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
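
The two coremask tests around this point (non_locking_app_on_locked_coremask above, locking_app_on_unlocked_coremask starting here) both hinge on --disable-cpumask-locks: as long as one of the two targets skips the core claim, both can share core 0 and talk on separate RPC sockets. A minimal sketch with the binary path, flag and socket names taken from the trace:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                  # first instance claims core 0
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second skips the claim, so both run
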
00:07:25.789 [2024-11-20 08:40:56.675825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.049 [2024-11-20 08:40:56.760254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.049 [2024-11-20 08:40:56.859345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58941 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58941 /var/tmp/spdk2.sock 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58941 ']' 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.984 08:40:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.984 [2024-11-20 08:40:57.677680] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:26.984 [2024-11-20 08:40:57.677817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58941 ] 00:07:26.984 [2024-11-20 08:40:57.839372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.243 [2024-11-20 08:40:58.003793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.501 [2024-11-20 08:40:58.205135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.068 08:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.068 08:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:28.068 08:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58941 00:07:28.068 08:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58941 00:07:28.068 08:40:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58925 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58925 ']' 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58925 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58925 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.765 killing process with pid 58925 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58925' 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58925 00:07:28.765 08:40:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58925 00:07:29.700 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58941 00:07:29.701 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58941 ']' 00:07:29.701 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58941 00:07:29.701 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.701 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.701 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58941 00:07:29.959 killing process with pid 58941 00:07:29.959 08:41:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.959 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.959 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58941' 00:07:29.959 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58941 00:07:29.959 08:41:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58941 00:07:30.525 00:07:30.525 real 0m4.683s 00:07:30.525 user 0m5.184s 00:07:30.525 sys 0m1.295s 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.525 ************************************ 00:07:30.525 END TEST locking_app_on_unlocked_coremask 00:07:30.525 ************************************ 00:07:30.525 08:41:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:30.525 08:41:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.525 08:41:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.525 08:41:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.525 ************************************ 00:07:30.525 START TEST locking_app_on_locked_coremask 00:07:30.525 ************************************ 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59014 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59014 /var/tmp/spdk.sock 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59014 ']' 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.525 08:41:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.525 [2024-11-20 08:41:01.270467] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:30.525 [2024-11-20 08:41:01.270595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59014 ] 00:07:30.525 [2024-11-20 08:41:01.421850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.783 [2024-11-20 08:41:01.508138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.783 [2024-11-20 08:41:01.606692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59030 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59030 /var/tmp/spdk2.sock 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59030 /var/tmp/spdk2.sock 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59030 /var/tmp/spdk2.sock 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59030 ']' 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.720 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.721 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.721 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.721 08:41:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.721 [2024-11-20 08:41:02.459842] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:31.721 [2024-11-20 08:41:02.459942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59030 ] 00:07:31.721 [2024-11-20 08:41:02.621295] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59014 has claimed it. 00:07:31.721 [2024-11-20 08:41:02.621395] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:32.353 ERROR: process (pid: 59030) is no longer running 00:07:32.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59030) - No such process 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59014 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59014 00:07:32.353 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59014 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59014 ']' 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59014 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59014 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.920 killing process with pid 59014 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59014' 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59014 00:07:32.920 08:41:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59014 00:07:33.487 00:07:33.487 real 0m2.955s 00:07:33.487 user 0m3.443s 00:07:33.487 sys 0m0.737s 00:07:33.487 08:41:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.487 08:41:04 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:33.487 ************************************ 00:07:33.487 END TEST locking_app_on_locked_coremask 00:07:33.487 ************************************ 00:07:33.487 08:41:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:33.487 08:41:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.487 08:41:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.487 08:41:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.487 ************************************ 00:07:33.487 START TEST locking_overlapped_coremask 00:07:33.487 ************************************ 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59081 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59081 /var/tmp/spdk.sock 00:07:33.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59081 ']' 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.487 08:41:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.487 [2024-11-20 08:41:04.267054] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
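
locking_app_on_locked_coremask, which ended above, covers the negative case: with locks enabled on both sides, a second target on the already-claimed core aborts with "Cannot create lock on core 0, probably process <pid> has claimed it." followed by "Unable to acquire lock on assigned core mask - exiting.", and the test asserts that failure via the NOT wrapper. A simplified sketch of that expectation, assuming (as the trace suggests) that the second spdk_tgt exits non-zero when the claim fails:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                     # primary instance holds the core 0 lock
    primary=$!
    if "$bin" -m 0x1 -r /var/tmp/spdk2.sock; then       # same core, locks still enabled
        echo "unexpected: second instance started"
    else
        echo "expected: unable to acquire lock on the assigned core mask"
    fi
    kill "$primary"
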
00:07:33.487 [2024-11-20 08:41:04.267175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ] 00:07:33.747 [2024-11-20 08:41:04.420787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.747 [2024-11-20 08:41:04.511329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.747 [2024-11-20 08:41:04.511395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.747 [2024-11-20 08:41:04.511392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.747 [2024-11-20 08:41:04.613888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59099 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59099 /var/tmp/spdk2.sock 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59099 /var/tmp/spdk2.sock 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:34.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59099 /var/tmp/spdk2.sock 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59099 ']' 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.682 08:41:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.682 [2024-11-20 08:41:05.422325] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:34.682 [2024-11-20 08:41:05.422443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59099 ] 00:07:34.682 [2024-11-20 08:41:05.587765] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59081 has claimed it. 00:07:34.682 [2024-11-20 08:41:05.590846] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:35.617 ERROR: process (pid: 59099) is no longer running 00:07:35.617 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59099) - No such process 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59081 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59081 ']' 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59081 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:35.617 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.618 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59081 00:07:35.618 killing process with pid 59081 00:07:35.618 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.618 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.618 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59081' 00:07:35.618 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59081 00:07:35.618 08:41:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59081 00:07:35.876 00:07:35.876 real 0m2.549s 00:07:35.876 user 0m7.208s 00:07:35.876 sys 0m0.556s 00:07:35.876 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.876 ************************************ 00:07:35.876 END TEST locking_overlapped_coremask 00:07:35.876 ************************************ 00:07:35.876 08:41:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.876 08:41:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:35.876 08:41:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.876 08:41:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.876 08:41:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.135 ************************************ 00:07:36.135 START TEST locking_overlapped_coremask_via_rpc 00:07:36.135 ************************************ 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:36.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59139 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59139 /var/tmp/spdk.sock 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59139 ']' 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.135 08:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.135 [2024-11-20 08:41:06.867478] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:36.135 [2024-11-20 08:41:06.867907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:07:36.135 [2024-11-20 08:41:07.015735] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
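The two coremask tests above rely on the same on-disk bookkeeping that the trace shows being checked: a target's claimed cores appear as /var/tmp/spdk_cpu_lock_NNN files held locked by the claiming process, which is what the lslocks grep and check_remaining_locks verify. A rough way to inspect that state by hand, assuming a spdk_tgt started with -m 0x7 is still running as $pid (the pids in this trace have already exited):

  $ ls /var/tmp/spdk_cpu_lock_*             # expect ..._000 ..._001 ..._002 for mask 0x7
  $ lslocks -p "$pid" | grep spdk_cpu_lock  # same check as the test: the claiming process holds these files locked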
00:07:36.135 [2024-11-20 08:41:07.016120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.394 [2024-11-20 08:41:07.097715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.394 [2024-11-20 08:41:07.097826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.394 [2024-11-20 08:41:07.097833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.394 [2024-11-20 08:41:07.194361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59155 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59155 /var/tmp/spdk2.sock 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59155 ']' 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.652 08:41:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.652 [2024-11-20 08:41:07.532970] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:36.652 [2024-11-20 08:41:07.533419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ] 00:07:36.931 [2024-11-20 08:41:07.699607] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:36.931 [2024-11-20 08:41:07.699691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.195 [2024-11-20 08:41:07.866790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.195 [2024-11-20 08:41:07.868934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:37.195 [2024-11-20 08:41:07.868940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.195 [2024-11-20 08:41:08.095767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 [2024-11-20 08:41:08.617980] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59139 has claimed it. 00:07:37.762 request: 00:07:37.762 { 00:07:37.762 "method": "framework_enable_cpumask_locks", 00:07:37.762 "req_id": 1 00:07:37.762 } 00:07:37.762 Got JSON-RPC error response 00:07:37.762 response: 00:07:37.762 { 00:07:37.762 "code": -32603, 00:07:37.762 "message": "Failed to claim CPU core: 2" 00:07:37.762 } 00:07:37.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
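The JSON-RPC exchange just above is the overlap case this test exists for: both targets were launched with --disable-cpumask-locks, the first (pid 59139, -m 0x7, cores 0-2) then claimed its cores over RPC, and the second (pid 59155, -m 0x1c) shares core 2 and is refused with -32603. A minimal sketch for replaying the same exchange by hand while two such targets are running, using the rpc.py client from the spdk repository root that the test drives:

  $ scripts/rpc.py framework_enable_cpumask_locks                          # first target claims cores 0-2
  $ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target fails on shared core 2:
                                                                           #   "Failed to claim CPU core: 2" (-32603)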
00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59139 /var/tmp/spdk.sock 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59139 ']' 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.762 08:41:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59155 /var/tmp/spdk2.sock 00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59155 ']' 00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.329 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.587 ************************************ 00:07:38.587 END TEST locking_overlapped_coremask_via_rpc 00:07:38.587 ************************************ 00:07:38.587 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.587 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:38.587 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:38.587 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:38.587 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:38.587 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:38.587 00:07:38.587 real 0m2.528s 00:07:38.587 user 0m1.426s 00:07:38.587 sys 0m0.181s 00:07:38.587 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.587 08:41:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.587 08:41:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:38.587 08:41:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59139 ]] 00:07:38.587 08:41:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59139 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59139 ']' 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59139 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59139 00:07:38.587 killing process with pid 59139 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59139' 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59139 00:07:38.587 08:41:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59139 00:07:39.153 08:41:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59155 ]] 00:07:39.153 08:41:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59155 00:07:39.153 08:41:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59155 ']' 00:07:39.153 08:41:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59155 00:07:39.153 08:41:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:39.153 08:41:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.153 
08:41:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59155 00:07:39.153 killing process with pid 59155 00:07:39.153 08:41:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:39.153 08:41:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:39.154 08:41:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59155' 00:07:39.154 08:41:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59155 00:07:39.154 08:41:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59155 00:07:39.720 08:41:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:39.720 08:41:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:39.720 08:41:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59139 ]] 00:07:39.720 08:41:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59139 00:07:39.720 08:41:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59139 ']' 00:07:39.720 08:41:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59139 00:07:39.720 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59139) - No such process 00:07:39.720 Process with pid 59139 is not found 00:07:39.720 Process with pid 59155 is not found 00:07:39.720 08:41:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59139 is not found' 00:07:39.720 08:41:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59155 ]] 00:07:39.720 08:41:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59155 00:07:39.721 08:41:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59155 ']' 00:07:39.721 08:41:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59155 00:07:39.721 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59155) - No such process 00:07:39.721 08:41:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59155 is not found' 00:07:39.721 08:41:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:39.721 00:07:39.721 real 0m22.356s 00:07:39.721 user 0m39.260s 00:07:39.721 sys 0m6.316s 00:07:39.721 08:41:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.721 08:41:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.721 ************************************ 00:07:39.721 END TEST cpu_locks 00:07:39.721 ************************************ 00:07:39.721 00:07:39.721 real 0m49.947s 00:07:39.721 user 1m34.992s 00:07:39.721 sys 0m10.495s 00:07:39.721 08:41:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.721 ************************************ 00:07:39.721 END TEST event 00:07:39.721 ************************************ 00:07:39.721 08:41:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:39.721 08:41:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:39.721 08:41:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.721 08:41:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.721 08:41:10 -- common/autotest_common.sh@10 -- # set +x 00:07:39.721 ************************************ 00:07:39.721 START TEST thread 00:07:39.721 ************************************ 00:07:39.721 08:41:10 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:39.979 * Looking for test storage... 
00:07:39.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:39.979 08:41:10 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.979 08:41:10 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.979 08:41:10 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.979 08:41:10 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.979 08:41:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.979 08:41:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.979 08:41:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.979 08:41:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.979 08:41:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.979 08:41:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.979 08:41:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.979 08:41:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.979 08:41:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.979 08:41:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.979 08:41:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.979 08:41:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:39.979 08:41:10 thread -- scripts/common.sh@345 -- # : 1 00:07:39.979 08:41:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.979 08:41:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.979 08:41:10 thread -- scripts/common.sh@365 -- # decimal 1 00:07:39.979 08:41:10 thread -- scripts/common.sh@353 -- # local d=1 00:07:39.979 08:41:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.979 08:41:10 thread -- scripts/common.sh@355 -- # echo 1 00:07:39.979 08:41:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.979 08:41:10 thread -- scripts/common.sh@366 -- # decimal 2 00:07:39.979 08:41:10 thread -- scripts/common.sh@353 -- # local d=2 00:07:39.979 08:41:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.979 08:41:10 thread -- scripts/common.sh@355 -- # echo 2 00:07:39.979 08:41:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.979 08:41:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.979 08:41:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.979 08:41:10 thread -- scripts/common.sh@368 -- # return 0 00:07:39.979 08:41:10 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.979 08:41:10 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.979 --rc genhtml_branch_coverage=1 00:07:39.979 --rc genhtml_function_coverage=1 00:07:39.979 --rc genhtml_legend=1 00:07:39.979 --rc geninfo_all_blocks=1 00:07:39.979 --rc geninfo_unexecuted_blocks=1 00:07:39.979 00:07:39.979 ' 00:07:39.979 08:41:10 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.979 --rc genhtml_branch_coverage=1 00:07:39.979 --rc genhtml_function_coverage=1 00:07:39.979 --rc genhtml_legend=1 00:07:39.979 --rc geninfo_all_blocks=1 00:07:39.979 --rc geninfo_unexecuted_blocks=1 00:07:39.979 00:07:39.979 ' 00:07:39.980 08:41:10 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:39.980 --rc genhtml_branch_coverage=1 00:07:39.980 --rc genhtml_function_coverage=1 00:07:39.980 --rc genhtml_legend=1 00:07:39.980 --rc geninfo_all_blocks=1 00:07:39.980 --rc geninfo_unexecuted_blocks=1 00:07:39.980 00:07:39.980 ' 00:07:39.980 08:41:10 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.980 --rc genhtml_branch_coverage=1 00:07:39.980 --rc genhtml_function_coverage=1 00:07:39.980 --rc genhtml_legend=1 00:07:39.980 --rc geninfo_all_blocks=1 00:07:39.980 --rc geninfo_unexecuted_blocks=1 00:07:39.980 00:07:39.980 ' 00:07:39.980 08:41:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:39.980 08:41:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:39.980 08:41:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.980 08:41:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.980 ************************************ 00:07:39.980 START TEST thread_poller_perf 00:07:39.980 ************************************ 00:07:39.980 08:41:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:39.980 [2024-11-20 08:41:10.814597] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:39.980 [2024-11-20 08:41:10.814983] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:07:40.238 [2024-11-20 08:41:10.961084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.238 [2024-11-20 08:41:11.039070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.238 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:41.615 [2024-11-20T08:41:12.530Z] ====================================== 00:07:41.615 [2024-11-20T08:41:12.530Z] busy:2205940239 (cyc) 00:07:41.615 [2024-11-20T08:41:12.530Z] total_run_count: 312000 00:07:41.615 [2024-11-20T08:41:12.530Z] tsc_hz: 2200000000 (cyc) 00:07:41.615 [2024-11-20T08:41:12.530Z] ====================================== 00:07:41.615 [2024-11-20T08:41:12.530Z] poller_cost: 7070 (cyc), 3213 (nsec) 00:07:41.615 00:07:41.615 real 0m1.316s 00:07:41.615 user 0m1.165s 00:07:41.615 ************************************ 00:07:41.615 END TEST thread_poller_perf 00:07:41.615 ************************************ 00:07:41.615 sys 0m0.042s 00:07:41.615 08:41:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.615 08:41:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:41.615 08:41:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:41.615 08:41:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:41.615 08:41:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.615 08:41:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.615 ************************************ 00:07:41.615 START TEST thread_poller_perf 00:07:41.615 ************************************ 00:07:41.615 08:41:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:41.615 [2024-11-20 08:41:12.185176] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:41.615 [2024-11-20 08:41:12.185306] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59321 ] 00:07:41.615 [2024-11-20 08:41:12.338317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.615 Running 1000 pollers for 1 seconds with 0 microseconds period. 
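The poller_cost figures in these summaries are consistent with a simple derivation from the counters printed alongside them: busy cycles divided by total_run_count gives cycles per poller call, and dividing by tsc_hz converts that to nanoseconds. For the 1-microsecond-period run above, as a quick shell check:

  $ echo $(( 2205940239 / 312000 ))             # ≈ 7070 cyc per call, matching poller_cost
  $ echo $(( 7070 * 1000000000 / 2200000000 ))  # ≈ 3213 nsec at tsc_hz 2200000000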
00:07:41.615 [2024-11-20 08:41:12.424485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.591 [2024-11-20T08:41:13.506Z] ====================================== 00:07:42.591 [2024-11-20T08:41:13.506Z] busy:2202612124 (cyc) 00:07:42.591 [2024-11-20T08:41:13.506Z] total_run_count: 3960000 00:07:42.591 [2024-11-20T08:41:13.506Z] tsc_hz: 2200000000 (cyc) 00:07:42.591 [2024-11-20T08:41:13.506Z] ====================================== 00:07:42.591 [2024-11-20T08:41:13.506Z] poller_cost: 556 (cyc), 252 (nsec) 00:07:42.591 00:07:42.591 real 0m1.327s 00:07:42.591 user 0m1.167s 00:07:42.591 sys 0m0.049s 00:07:42.591 ************************************ 00:07:42.591 END TEST thread_poller_perf 00:07:42.591 ************************************ 00:07:42.591 08:41:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.591 08:41:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:42.850 08:41:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:42.850 ************************************ 00:07:42.850 END TEST thread 00:07:42.850 ************************************ 00:07:42.850 00:07:42.850 real 0m2.941s 00:07:42.850 user 0m2.478s 00:07:42.850 sys 0m0.248s 00:07:42.850 08:41:13 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.850 08:41:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:42.850 08:41:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:42.850 08:41:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:42.850 08:41:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.850 08:41:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.850 08:41:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.850 ************************************ 00:07:42.850 START TEST app_cmdline 00:07:42.850 ************************************ 00:07:42.850 08:41:13 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:42.850 * Looking for test storage... 
00:07:42.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:42.850 08:41:13 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.850 08:41:13 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.850 08:41:13 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:43.108 08:41:13 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.108 08:41:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:43.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.109 --rc genhtml_branch_coverage=1 00:07:43.109 --rc genhtml_function_coverage=1 00:07:43.109 --rc genhtml_legend=1 00:07:43.109 --rc geninfo_all_blocks=1 00:07:43.109 --rc geninfo_unexecuted_blocks=1 00:07:43.109 00:07:43.109 ' 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:43.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.109 --rc genhtml_branch_coverage=1 00:07:43.109 --rc genhtml_function_coverage=1 00:07:43.109 --rc genhtml_legend=1 00:07:43.109 --rc geninfo_all_blocks=1 00:07:43.109 --rc geninfo_unexecuted_blocks=1 00:07:43.109 00:07:43.109 ' 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:43.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.109 --rc genhtml_branch_coverage=1 00:07:43.109 --rc genhtml_function_coverage=1 00:07:43.109 --rc genhtml_legend=1 00:07:43.109 --rc geninfo_all_blocks=1 00:07:43.109 --rc geninfo_unexecuted_blocks=1 00:07:43.109 00:07:43.109 ' 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:43.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.109 --rc genhtml_branch_coverage=1 00:07:43.109 --rc genhtml_function_coverage=1 00:07:43.109 --rc genhtml_legend=1 00:07:43.109 --rc geninfo_all_blocks=1 00:07:43.109 --rc geninfo_unexecuted_blocks=1 00:07:43.109 00:07:43.109 ' 00:07:43.109 08:41:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:43.109 08:41:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59409 00:07:43.109 08:41:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59409 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59409 ']' 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.109 08:41:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.109 08:41:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.109 [2024-11-20 08:41:13.856731] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:43.109 [2024-11-20 08:41:13.857873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59409 ] 00:07:43.109 [2024-11-20 08:41:14.015670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.367 [2024-11-20 08:41:14.103283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.367 [2024-11-20 08:41:14.209953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.302 08:41:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.302 08:41:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:44.302 08:41:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:44.302 { 00:07:44.302 "version": "SPDK v25.01-pre git sha1 6fc96a60f", 00:07:44.302 "fields": { 00:07:44.302 "major": 25, 00:07:44.302 "minor": 1, 00:07:44.302 "patch": 0, 00:07:44.302 "suffix": "-pre", 00:07:44.302 "commit": "6fc96a60f" 00:07:44.302 } 00:07:44.302 } 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:44.302 08:41:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.302 08:41:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:44.302 08:41:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:44.302 08:41:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.302 08:41:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:44.303 08:41:15 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.869 request: 00:07:44.869 { 00:07:44.869 "method": "env_dpdk_get_mem_stats", 00:07:44.869 "req_id": 1 00:07:44.869 } 00:07:44.869 Got JSON-RPC error response 00:07:44.869 response: 00:07:44.869 { 00:07:44.869 "code": -32601, 00:07:44.869 "message": "Method not found" 00:07:44.869 } 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.869 08:41:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59409 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59409 ']' 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59409 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59409 00:07:44.869 killing process with pid 59409 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59409' 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@973 -- # kill 59409 00:07:44.869 08:41:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 59409 00:07:45.434 ************************************ 00:07:45.434 END TEST app_cmdline 00:07:45.434 ************************************ 00:07:45.434 00:07:45.434 real 0m2.512s 00:07:45.434 user 0m3.039s 00:07:45.434 sys 0m0.630s 00:07:45.434 08:41:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.434 08:41:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.434 08:41:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:45.434 08:41:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.434 08:41:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.434 08:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.434 ************************************ 00:07:45.434 START TEST version 00:07:45.434 ************************************ 00:07:45.435 08:41:16 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:45.435 * Looking for test storage... 
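The cmdline test that just finished exercises the --rpcs-allowed allowlist: the target in this trace was started permitting only spdk_get_version and rpc_get_methods, so spdk_get_version succeeds, rpc_get_methods reports exactly those two methods, and env_dpdk_get_mem_stats is rejected with -32601. A short sketch of the same checks against such a target, assuming it listens on the default /var/tmp/spdk.sock:

  $ build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  $ scripts/rpc.py spdk_get_version          # allowed: returns the version object shown above
  $ scripts/rpc.py rpc_get_methods           # allowed: lists only the two permitted methods
  $ scripts/rpc.py env_dpdk_get_mem_stats    # rejected: JSON-RPC error -32601 "Method not found"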
00:07:45.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:45.435 08:41:16 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.435 08:41:16 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.435 08:41:16 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.693 08:41:16 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.693 08:41:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.693 08:41:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.693 08:41:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.693 08:41:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.693 08:41:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.693 08:41:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.693 08:41:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.693 08:41:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.693 08:41:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.693 08:41:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.693 08:41:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.693 08:41:16 version -- scripts/common.sh@344 -- # case "$op" in 00:07:45.693 08:41:16 version -- scripts/common.sh@345 -- # : 1 00:07:45.693 08:41:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.693 08:41:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.693 08:41:16 version -- scripts/common.sh@365 -- # decimal 1 00:07:45.693 08:41:16 version -- scripts/common.sh@353 -- # local d=1 00:07:45.693 08:41:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.693 08:41:16 version -- scripts/common.sh@355 -- # echo 1 00:07:45.693 08:41:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.693 08:41:16 version -- scripts/common.sh@366 -- # decimal 2 00:07:45.693 08:41:16 version -- scripts/common.sh@353 -- # local d=2 00:07:45.693 08:41:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.693 08:41:16 version -- scripts/common.sh@355 -- # echo 2 00:07:45.693 08:41:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.693 08:41:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.693 08:41:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.693 08:41:16 version -- scripts/common.sh@368 -- # return 0 00:07:45.693 08:41:16 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.693 08:41:16 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.693 --rc genhtml_branch_coverage=1 00:07:45.693 --rc genhtml_function_coverage=1 00:07:45.693 --rc genhtml_legend=1 00:07:45.694 --rc geninfo_all_blocks=1 00:07:45.694 --rc geninfo_unexecuted_blocks=1 00:07:45.694 00:07:45.694 ' 00:07:45.694 08:41:16 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.694 --rc genhtml_branch_coverage=1 00:07:45.694 --rc genhtml_function_coverage=1 00:07:45.694 --rc genhtml_legend=1 00:07:45.694 --rc geninfo_all_blocks=1 00:07:45.694 --rc geninfo_unexecuted_blocks=1 00:07:45.694 00:07:45.694 ' 00:07:45.694 08:41:16 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.694 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:45.694 --rc genhtml_branch_coverage=1 00:07:45.694 --rc genhtml_function_coverage=1 00:07:45.694 --rc genhtml_legend=1 00:07:45.694 --rc geninfo_all_blocks=1 00:07:45.694 --rc geninfo_unexecuted_blocks=1 00:07:45.694 00:07:45.694 ' 00:07:45.694 08:41:16 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.694 --rc genhtml_branch_coverage=1 00:07:45.694 --rc genhtml_function_coverage=1 00:07:45.694 --rc genhtml_legend=1 00:07:45.694 --rc geninfo_all_blocks=1 00:07:45.694 --rc geninfo_unexecuted_blocks=1 00:07:45.694 00:07:45.694 ' 00:07:45.694 08:41:16 version -- app/version.sh@17 -- # get_header_version major 00:07:45.694 08:41:16 version -- app/version.sh@14 -- # cut -f2 00:07:45.694 08:41:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.694 08:41:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:45.694 08:41:16 version -- app/version.sh@17 -- # major=25 00:07:45.694 08:41:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:45.694 08:41:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:45.694 08:41:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.694 08:41:16 version -- app/version.sh@14 -- # cut -f2 00:07:45.694 08:41:16 version -- app/version.sh@18 -- # minor=1 00:07:45.694 08:41:16 version -- app/version.sh@19 -- # get_header_version patch 00:07:45.694 08:41:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:45.694 08:41:16 version -- app/version.sh@14 -- # cut -f2 00:07:45.694 08:41:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.694 08:41:16 version -- app/version.sh@19 -- # patch=0 00:07:45.694 08:41:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:45.694 08:41:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:45.694 08:41:16 version -- app/version.sh@14 -- # cut -f2 00:07:45.694 08:41:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.694 08:41:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:45.694 08:41:16 version -- app/version.sh@22 -- # version=25.1 00:07:45.694 08:41:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:45.694 08:41:16 version -- app/version.sh@28 -- # version=25.1rc0 00:07:45.694 08:41:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:45.694 08:41:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:45.694 08:41:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:45.694 08:41:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:45.694 00:07:45.694 real 0m0.301s 00:07:45.694 user 0m0.197s 00:07:45.694 sys 0m0.134s 00:07:45.694 ************************************ 00:07:45.694 END TEST version 00:07:45.694 ************************************ 00:07:45.694 08:41:16 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.694 08:41:16 version -- common/autotest_common.sh@10 -- # set +x 00:07:45.694 08:41:16 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:45.694 08:41:16 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:45.694 08:41:16 -- spdk/autotest.sh@194 -- # uname -s 00:07:45.694 08:41:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:45.694 08:41:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:45.694 08:41:16 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:45.694 08:41:16 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:45.694 08:41:16 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:45.694 08:41:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.694 08:41:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.694 08:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.694 ************************************ 00:07:45.694 START TEST spdk_dd 00:07:45.694 ************************************ 00:07:45.694 08:41:16 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:45.694 * Looking for test storage... 00:07:45.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:45.694 08:41:16 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.694 08:41:16 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.694 08:41:16 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.953 08:41:16 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:45.953 08:41:16 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.953 08:41:16 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.953 --rc genhtml_branch_coverage=1 00:07:45.953 --rc genhtml_function_coverage=1 00:07:45.953 --rc genhtml_legend=1 00:07:45.953 --rc geninfo_all_blocks=1 00:07:45.953 --rc geninfo_unexecuted_blocks=1 00:07:45.953 00:07:45.953 ' 00:07:45.953 08:41:16 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.953 --rc genhtml_branch_coverage=1 00:07:45.953 --rc genhtml_function_coverage=1 00:07:45.953 --rc genhtml_legend=1 00:07:45.953 --rc geninfo_all_blocks=1 00:07:45.953 --rc geninfo_unexecuted_blocks=1 00:07:45.953 00:07:45.953 ' 00:07:45.953 08:41:16 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.953 --rc genhtml_branch_coverage=1 00:07:45.953 --rc genhtml_function_coverage=1 00:07:45.953 --rc genhtml_legend=1 00:07:45.953 --rc geninfo_all_blocks=1 00:07:45.953 --rc geninfo_unexecuted_blocks=1 00:07:45.953 00:07:45.953 ' 00:07:45.953 08:41:16 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.953 --rc genhtml_branch_coverage=1 00:07:45.953 --rc genhtml_function_coverage=1 00:07:45.953 --rc genhtml_legend=1 00:07:45.953 --rc geninfo_all_blocks=1 00:07:45.953 --rc geninfo_unexecuted_blocks=1 00:07:45.953 00:07:45.953 ' 00:07:45.953 08:41:16 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.953 08:41:16 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.953 08:41:16 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.953 08:41:16 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.953 08:41:16 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.953 08:41:16 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:45.953 08:41:16 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.953 08:41:16 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:46.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.211 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:46.211 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:46.471 08:41:17 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:46.471 08:41:17 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:46.471 08:41:17 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:46.471 08:41:17 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:46.471 08:41:17 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
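Reader's note: check_liburing, started just above, runs objdump -p on the freshly built spdk_dd binary, keeps only the NEEDED lines, and tests each dependency name against liburing.so.*; the long run of [[ ... == liburing.so.* ]] entries continuing below is that loop walking every shared library until it reaches liburing.so.2. A reduced sketch of the idea (the real dd/common.sh helper also consults test/common/build_config.sh, as the trace further below shows):

check_liburing() {
    local _ lib
    local liburing_in_use=0
    # objdump -p prints dependency lines like "  NEEDED   liburing.so.2";
    # read -r _ lib _ keeps the library name from each of them.
    while read -r _ lib _; do
        if [[ $lib == liburing.so.* ]]; then
            liburing_in_use=1
            printf '* spdk_dd linked to liburing\n'
        fi
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    return 0
}
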
00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.471 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:46.472 * spdk_dd linked to liburing 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:46.472 08:41:17 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:46.472 08:41:17 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:46.473 08:41:17 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:46.473 08:41:17 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:46.473 08:41:17 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:46.473 08:41:17 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:46.473 08:41:17 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:46.473 08:41:17 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:46.473 08:41:17 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:46.473 08:41:17 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:46.473 08:41:17 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:46.473 08:41:17 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.473 08:41:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.473 ************************************ 00:07:46.473 START TEST spdk_dd_basic_rw 00:07:46.473 ************************************ 00:07:46.473 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:46.473 * Looking for test storage... 00:07:46.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:46.473 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.473 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.473 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.732 --rc genhtml_branch_coverage=1 00:07:46.732 --rc genhtml_function_coverage=1 00:07:46.732 --rc genhtml_legend=1 00:07:46.732 --rc geninfo_all_blocks=1 00:07:46.732 --rc geninfo_unexecuted_blocks=1 00:07:46.732 00:07:46.732 ' 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.732 --rc genhtml_branch_coverage=1 00:07:46.732 --rc genhtml_function_coverage=1 00:07:46.732 --rc genhtml_legend=1 00:07:46.732 --rc geninfo_all_blocks=1 00:07:46.732 --rc geninfo_unexecuted_blocks=1 00:07:46.732 00:07:46.732 ' 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.732 --rc genhtml_branch_coverage=1 00:07:46.732 --rc genhtml_function_coverage=1 00:07:46.732 --rc genhtml_legend=1 00:07:46.732 --rc geninfo_all_blocks=1 00:07:46.732 --rc geninfo_unexecuted_blocks=1 00:07:46.732 00:07:46.732 ' 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.732 --rc genhtml_branch_coverage=1 00:07:46.732 --rc genhtml_function_coverage=1 00:07:46.732 --rc genhtml_legend=1 00:07:46.732 --rc geninfo_all_blocks=1 00:07:46.732 --rc geninfo_unexecuted_blocks=1 00:07:46.732 00:07:46.732 ' 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.732 08:41:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
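Reader's note: what follows is get_native_nvme_bs resolving the namespace's native block size. It captures the full spdk_nvme_identify report, finds which LBA format is current, then reads that format's data size (4096 bytes here, since the QEMU namespace uses LBA Format #04). A condensed sketch using the same two regexes visible in the trace (error handling omitted; this run invokes the identify tool from build/bin):

get_native_nvme_bs() {
    local pci=$1 lbaf id
    mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")

    # "Current LBA Format: LBA Format #04" -> lbaf=04
    local re_current='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}

    # "LBA Format #04: Data Size: 4096 ..." -> native block size in bytes
    local re_bs="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re_bs ]] && echo "${BASH_REMATCH[1]}"
}

native_bs=$(get_native_nvme_bs 0000:00:10.0)   # 4096 for this namespace
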
00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:46.733 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:46.993 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:46.993 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.994 ************************************ 00:07:46.994 START TEST dd_bs_lt_native_bs 00:07:46.994 ************************************ 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.994 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.995 08:41:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:46.995 { 00:07:46.995 "subsystems": [ 00:07:46.995 { 00:07:46.995 "subsystem": "bdev", 00:07:46.995 "config": [ 00:07:46.995 { 00:07:46.995 "params": { 00:07:46.995 "trtype": "pcie", 00:07:46.995 "traddr": "0000:00:10.0", 00:07:46.995 "name": "Nvme0" 00:07:46.995 }, 00:07:46.995 "method": "bdev_nvme_attach_controller" 00:07:46.995 }, 00:07:46.995 { 00:07:46.995 "method": "bdev_wait_for_examine" 00:07:46.995 } 00:07:46.995 ] 00:07:46.995 } 00:07:46.995 ] 00:07:46.995 } 00:07:46.995 [2024-11-20 08:41:17.753329] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:46.995 [2024-11-20 08:41:17.753434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:07:46.995 [2024-11-20 08:41:17.899966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.279 [2024-11-20 08:41:17.993037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.279 [2024-11-20 08:41:18.071790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.548 [2024-11-20 08:41:18.191898] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:47.548 [2024-11-20 08:41:18.191976] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.548 [2024-11-20 08:41:18.363248] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:47.548 ************************************ 00:07:47.548 END TEST dd_bs_lt_native_bs 00:07:47.548 ************************************ 00:07:47.548 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:47.548 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:47.548 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:47.548 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:47.548 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:47.548 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:47.548 00:07:47.548 real 0m0.749s 00:07:47.548 user 0m0.520s 00:07:47.548 sys 0m0.191s 00:07:47.548 
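The block above is the dd_bs_lt_native_bs case: the harness pulls the native block size out of the identify dump (Current LBA Format #04, Data Size 4096), then runs spdk_dd with --bs=2048 under the NOT wrapper and requires it to fail with the "--bs value cannot be less than ... native block size" error seen in the trace. A minimal standalone sketch of that check, assuming the same spdk_dd binary path and a hypothetical conf.json equivalent to the bdev config printed above:

  # Sketch only -- conf.json stands in for the config that gen_conf emits in the harness.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  native_bs=4096   # data size of the current LBA format (#04) in the identify output
  small_bs=2048    # deliberately smaller than the native block size
  # spdk_dd must refuse a --bs below the target bdev's native block size and exit
  # non-zero; the harness's NOT wrapper inverts that status, so the test passes
  # only when spdk_dd fails.
  if "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs="$small_bs" --json conf.json; then
    echo "unexpected: --bs=$small_bs < native bs $native_bs was accepted" >&2
    exit 1
  fi
  echo "spdk_dd rejected bs=$small_bs as expected"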
08:41:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.548 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.806 ************************************ 00:07:47.806 START TEST dd_rw 00:07:47.806 ************************************ 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:47.806 08:41:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.373 08:41:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:48.373 08:41:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:48.373 08:41:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.373 08:41:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.373 { 00:07:48.373 "subsystems": [ 00:07:48.373 { 00:07:48.373 "subsystem": "bdev", 00:07:48.373 "config": [ 00:07:48.373 { 00:07:48.373 "params": { 00:07:48.373 "trtype": "pcie", 00:07:48.373 "traddr": "0000:00:10.0", 00:07:48.373 "name": "Nvme0" 00:07:48.373 }, 00:07:48.373 "method": "bdev_nvme_attach_controller" 00:07:48.373 }, 00:07:48.373 { 00:07:48.373 "method": "bdev_wait_for_examine" 00:07:48.373 } 00:07:48.373 ] 00:07:48.373 } 00:07:48.373 
] 00:07:48.373 } 00:07:48.373 [2024-11-20 08:41:19.194177] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:48.373 [2024-11-20 08:41:19.194462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59797 ] 00:07:48.632 [2024-11-20 08:41:19.348081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.632 [2024-11-20 08:41:19.433477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.632 [2024-11-20 08:41:19.509361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.889  [2024-11-20T08:41:20.063Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:49.148 00:07:49.148 08:41:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:49.148 08:41:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:49.148 08:41:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.148 08:41:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.148 { 00:07:49.148 "subsystems": [ 00:07:49.148 { 00:07:49.148 "subsystem": "bdev", 00:07:49.148 "config": [ 00:07:49.148 { 00:07:49.148 "params": { 00:07:49.148 "trtype": "pcie", 00:07:49.148 "traddr": "0000:00:10.0", 00:07:49.148 "name": "Nvme0" 00:07:49.148 }, 00:07:49.148 "method": "bdev_nvme_attach_controller" 00:07:49.148 }, 00:07:49.148 { 00:07:49.148 "method": "bdev_wait_for_examine" 00:07:49.148 } 00:07:49.148 ] 00:07:49.148 } 00:07:49.148 ] 00:07:49.148 } 00:07:49.148 [2024-11-20 08:41:19.955413] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:49.148 [2024-11-20 08:41:19.955528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59816 ] 00:07:49.406 [2024-11-20 08:41:20.106259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.406 [2024-11-20 08:41:20.193033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.406 [2024-11-20 08:41:20.270194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.664  [2024-11-20T08:41:20.837Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:49.922 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.922 08:41:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.922 { 00:07:49.922 "subsystems": [ 00:07:49.922 { 00:07:49.922 "subsystem": "bdev", 00:07:49.922 "config": [ 00:07:49.922 { 00:07:49.922 "params": { 00:07:49.922 "trtype": "pcie", 00:07:49.922 "traddr": "0000:00:10.0", 00:07:49.922 "name": "Nvme0" 00:07:49.922 }, 00:07:49.922 "method": "bdev_nvme_attach_controller" 00:07:49.922 }, 00:07:49.922 { 00:07:49.922 "method": "bdev_wait_for_examine" 00:07:49.922 } 00:07:49.922 ] 00:07:49.922 } 00:07:49.922 ] 00:07:49.922 } 00:07:49.922 [2024-11-20 08:41:20.725445] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:49.922 [2024-11-20 08:41:20.725712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59837 ] 00:07:50.180 [2024-11-20 08:41:20.878038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.180 [2024-11-20 08:41:20.957358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.180 [2024-11-20 08:41:21.031083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.438  [2024-11-20T08:41:21.612Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:50.697 00:07:50.697 08:41:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:50.697 08:41:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:50.697 08:41:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:50.697 08:41:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:50.697 08:41:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:50.697 08:41:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:50.697 08:41:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:51.270 08:41:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:51.270 08:41:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:51.270 08:41:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:51.270 08:41:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:51.270 [2024-11-20 08:41:22.121507] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:51.270 [2024-11-20 08:41:22.121622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59858 ] 00:07:51.270 { 00:07:51.270 "subsystems": [ 00:07:51.270 { 00:07:51.270 "subsystem": "bdev", 00:07:51.271 "config": [ 00:07:51.271 { 00:07:51.271 "params": { 00:07:51.271 "trtype": "pcie", 00:07:51.271 "traddr": "0000:00:10.0", 00:07:51.271 "name": "Nvme0" 00:07:51.271 }, 00:07:51.271 "method": "bdev_nvme_attach_controller" 00:07:51.271 }, 00:07:51.271 { 00:07:51.271 "method": "bdev_wait_for_examine" 00:07:51.271 } 00:07:51.271 ] 00:07:51.271 } 00:07:51.271 ] 00:07:51.271 } 00:07:51.529 [2024-11-20 08:41:22.271501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.529 [2024-11-20 08:41:22.357271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.529 [2024-11-20 08:41:22.432238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.788  [2024-11-20T08:41:22.961Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:52.047 00:07:52.047 08:41:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:52.047 08:41:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:52.047 08:41:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:52.047 08:41:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.047 [2024-11-20 08:41:22.860636] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:52.047 [2024-11-20 08:41:22.860735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59877 ] 00:07:52.047 { 00:07:52.047 "subsystems": [ 00:07:52.047 { 00:07:52.047 "subsystem": "bdev", 00:07:52.047 "config": [ 00:07:52.047 { 00:07:52.047 "params": { 00:07:52.047 "trtype": "pcie", 00:07:52.047 "traddr": "0000:00:10.0", 00:07:52.047 "name": "Nvme0" 00:07:52.047 }, 00:07:52.047 "method": "bdev_nvme_attach_controller" 00:07:52.047 }, 00:07:52.047 { 00:07:52.047 "method": "bdev_wait_for_examine" 00:07:52.047 } 00:07:52.047 ] 00:07:52.047 } 00:07:52.047 ] 00:07:52.047 } 00:07:52.305 [2024-11-20 08:41:23.007348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.305 [2024-11-20 08:41:23.093175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.305 [2024-11-20 08:41:23.168293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.563  [2024-11-20T08:41:23.737Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:52.822 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:52.822 08:41:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.822 { 00:07:52.822 "subsystems": [ 00:07:52.822 { 00:07:52.822 "subsystem": "bdev", 00:07:52.822 "config": [ 00:07:52.822 { 00:07:52.822 "params": { 00:07:52.822 "trtype": "pcie", 00:07:52.822 "traddr": "0000:00:10.0", 00:07:52.822 "name": "Nvme0" 00:07:52.822 }, 00:07:52.822 "method": "bdev_nvme_attach_controller" 00:07:52.822 }, 00:07:52.822 { 00:07:52.822 "method": "bdev_wait_for_examine" 00:07:52.822 } 00:07:52.822 ] 00:07:52.822 } 00:07:52.822 ] 00:07:52.822 } 00:07:52.822 [2024-11-20 08:41:23.608748] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:52.822 [2024-11-20 08:41:23.608871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59888 ] 00:07:53.081 [2024-11-20 08:41:23.757970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.081 [2024-11-20 08:41:23.836026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.081 [2024-11-20 08:41:23.910383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.338  [2024-11-20T08:41:24.512Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:53.597 00:07:53.597 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:53.597 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:53.597 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:53.597 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:53.597 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:53.597 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:53.597 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:53.597 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.165 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:54.165 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:54.165 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:54.165 08:41:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.165 [2024-11-20 08:41:24.926378] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:54.165 [2024-11-20 08:41:24.926495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59917 ] 00:07:54.165 { 00:07:54.165 "subsystems": [ 00:07:54.165 { 00:07:54.165 "subsystem": "bdev", 00:07:54.165 "config": [ 00:07:54.165 { 00:07:54.165 "params": { 00:07:54.165 "trtype": "pcie", 00:07:54.165 "traddr": "0000:00:10.0", 00:07:54.165 "name": "Nvme0" 00:07:54.165 }, 00:07:54.165 "method": "bdev_nvme_attach_controller" 00:07:54.165 }, 00:07:54.165 { 00:07:54.165 "method": "bdev_wait_for_examine" 00:07:54.165 } 00:07:54.165 ] 00:07:54.165 } 00:07:54.165 ] 00:07:54.165 } 00:07:54.165 [2024-11-20 08:41:25.076255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.424 [2024-11-20 08:41:25.173749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.424 [2024-11-20 08:41:25.248676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.683  [2024-11-20T08:41:25.856Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:54.941 00:07:54.941 08:41:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:54.941 08:41:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:54.941 08:41:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:54.941 08:41:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.941 [2024-11-20 08:41:25.679547] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:54.941 [2024-11-20 08:41:25.679660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:07:54.941 { 00:07:54.941 "subsystems": [ 00:07:54.941 { 00:07:54.941 "subsystem": "bdev", 00:07:54.941 "config": [ 00:07:54.941 { 00:07:54.941 "params": { 00:07:54.941 "trtype": "pcie", 00:07:54.941 "traddr": "0000:00:10.0", 00:07:54.941 "name": "Nvme0" 00:07:54.941 }, 00:07:54.941 "method": "bdev_nvme_attach_controller" 00:07:54.941 }, 00:07:54.941 { 00:07:54.941 "method": "bdev_wait_for_examine" 00:07:54.941 } 00:07:54.941 ] 00:07:54.941 } 00:07:54.941 ] 00:07:54.941 } 00:07:54.941 [2024-11-20 08:41:25.826824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.200 [2024-11-20 08:41:25.907523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.200 [2024-11-20 08:41:25.980948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.200  [2024-11-20T08:41:26.373Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:55.458 00:07:55.458 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:55.716 08:41:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.716 [2024-11-20 08:41:26.421704] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:55.716 [2024-11-20 08:41:26.421845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59946 ] 00:07:55.716 { 00:07:55.716 "subsystems": [ 00:07:55.716 { 00:07:55.716 "subsystem": "bdev", 00:07:55.716 "config": [ 00:07:55.716 { 00:07:55.716 "params": { 00:07:55.716 "trtype": "pcie", 00:07:55.716 "traddr": "0000:00:10.0", 00:07:55.716 "name": "Nvme0" 00:07:55.716 }, 00:07:55.716 "method": "bdev_nvme_attach_controller" 00:07:55.716 }, 00:07:55.716 { 00:07:55.716 "method": "bdev_wait_for_examine" 00:07:55.716 } 00:07:55.716 ] 00:07:55.716 } 00:07:55.716 ] 00:07:55.716 } 00:07:55.716 [2024-11-20 08:41:26.566583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.973 [2024-11-20 08:41:26.646903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.973 [2024-11-20 08:41:26.720497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.973  [2024-11-20T08:41:27.145Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:56.230 00:07:56.230 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:56.230 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:56.230 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:56.231 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:56.231 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:56.231 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:56.231 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.810 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:56.810 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:56.810 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:56.810 08:41:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.066 [2024-11-20 08:41:27.752511] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:57.066 [2024-11-20 08:41:27.752623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59971 ] 00:07:57.066 { 00:07:57.066 "subsystems": [ 00:07:57.066 { 00:07:57.066 "subsystem": "bdev", 00:07:57.066 "config": [ 00:07:57.066 { 00:07:57.066 "params": { 00:07:57.066 "trtype": "pcie", 00:07:57.066 "traddr": "0000:00:10.0", 00:07:57.066 "name": "Nvme0" 00:07:57.066 }, 00:07:57.066 "method": "bdev_nvme_attach_controller" 00:07:57.066 }, 00:07:57.066 { 00:07:57.066 "method": "bdev_wait_for_examine" 00:07:57.066 } 00:07:57.066 ] 00:07:57.066 } 00:07:57.066 ] 00:07:57.066 } 00:07:57.066 [2024-11-20 08:41:27.898968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.323 [2024-11-20 08:41:27.979242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.323 [2024-11-20 08:41:28.052373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.323  [2024-11-20T08:41:28.495Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:57.580 00:07:57.580 08:41:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:57.580 08:41:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:57.580 08:41:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:57.580 08:41:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.838 { 00:07:57.838 "subsystems": [ 00:07:57.838 { 00:07:57.838 "subsystem": "bdev", 00:07:57.838 "config": [ 00:07:57.838 { 00:07:57.838 "params": { 00:07:57.838 "trtype": "pcie", 00:07:57.838 "traddr": "0000:00:10.0", 00:07:57.838 "name": "Nvme0" 00:07:57.838 }, 00:07:57.838 "method": "bdev_nvme_attach_controller" 00:07:57.838 }, 00:07:57.838 { 00:07:57.838 "method": "bdev_wait_for_examine" 00:07:57.838 } 00:07:57.838 ] 00:07:57.838 } 00:07:57.838 ] 00:07:57.838 } 00:07:57.838 [2024-11-20 08:41:28.495994] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:57.838 [2024-11-20 08:41:28.496106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59984 ] 00:07:57.838 [2024-11-20 08:41:28.645924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.838 [2024-11-20 08:41:28.726964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.096 [2024-11-20 08:41:28.800440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.096  [2024-11-20T08:41:29.270Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:58.355 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:58.355 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:58.355 { 00:07:58.355 "subsystems": [ 00:07:58.355 { 00:07:58.355 "subsystem": "bdev", 00:07:58.355 "config": [ 00:07:58.355 { 00:07:58.355 "params": { 00:07:58.355 "trtype": "pcie", 00:07:58.355 "traddr": "0000:00:10.0", 00:07:58.355 "name": "Nvme0" 00:07:58.355 }, 00:07:58.355 "method": "bdev_nvme_attach_controller" 00:07:58.355 }, 00:07:58.355 { 00:07:58.355 "method": "bdev_wait_for_examine" 00:07:58.355 } 00:07:58.355 ] 00:07:58.355 } 00:07:58.355 ] 00:07:58.355 } 00:07:58.355 [2024-11-20 08:41:29.249819] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:58.355 [2024-11-20 08:41:29.249944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60008 ] 00:07:58.613 [2024-11-20 08:41:29.401822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.614 [2024-11-20 08:41:29.487049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.872 [2024-11-20 08:41:29.562302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.872  [2024-11-20T08:41:30.047Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:59.132 00:07:59.132 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:59.132 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:59.132 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:59.132 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:59.132 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:59.132 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:59.132 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:59.132 08:41:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.698 08:41:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:59.699 08:41:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:59.699 08:41:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.699 08:41:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.699 [2024-11-20 08:41:30.584879] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:59.699 [2024-11-20 08:41:30.585258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60029 ] 00:07:59.699 { 00:07:59.699 "subsystems": [ 00:07:59.699 { 00:07:59.699 "subsystem": "bdev", 00:07:59.699 "config": [ 00:07:59.699 { 00:07:59.699 "params": { 00:07:59.699 "trtype": "pcie", 00:07:59.699 "traddr": "0000:00:10.0", 00:07:59.699 "name": "Nvme0" 00:07:59.699 }, 00:07:59.699 "method": "bdev_nvme_attach_controller" 00:07:59.699 }, 00:07:59.699 { 00:07:59.699 "method": "bdev_wait_for_examine" 00:07:59.699 } 00:07:59.699 ] 00:07:59.699 } 00:07:59.699 ] 00:07:59.699 } 00:07:59.956 [2024-11-20 08:41:30.729042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.956 [2024-11-20 08:41:30.808727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.214 [2024-11-20 08:41:30.880838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.214  [2024-11-20T08:41:31.388Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:00.473 00:08:00.473 08:41:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:00.473 08:41:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:00.473 08:41:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:00.474 08:41:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.474 { 00:08:00.474 "subsystems": [ 00:08:00.474 { 00:08:00.474 "subsystem": "bdev", 00:08:00.474 "config": [ 00:08:00.474 { 00:08:00.474 "params": { 00:08:00.474 "trtype": "pcie", 00:08:00.474 "traddr": "0000:00:10.0", 00:08:00.474 "name": "Nvme0" 00:08:00.474 }, 00:08:00.474 "method": "bdev_nvme_attach_controller" 00:08:00.474 }, 00:08:00.474 { 00:08:00.474 "method": "bdev_wait_for_examine" 00:08:00.474 } 00:08:00.474 ] 00:08:00.474 } 00:08:00.474 ] 00:08:00.474 } 00:08:00.474 [2024-11-20 08:41:31.321078] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:00.474 [2024-11-20 08:41:31.321212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60048 ] 00:08:00.732 [2024-11-20 08:41:31.468657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.732 [2024-11-20 08:41:31.561501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.732 [2024-11-20 08:41:31.632672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.990  [2024-11-20T08:41:32.163Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:01.248 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.248 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.248 [2024-11-20 08:41:32.077957] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:01.248 [2024-11-20 08:41:32.078375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60064 ] 00:08:01.248 { 00:08:01.248 "subsystems": [ 00:08:01.248 { 00:08:01.248 "subsystem": "bdev", 00:08:01.248 "config": [ 00:08:01.248 { 00:08:01.248 "params": { 00:08:01.248 "trtype": "pcie", 00:08:01.248 "traddr": "0000:00:10.0", 00:08:01.248 "name": "Nvme0" 00:08:01.248 }, 00:08:01.248 "method": "bdev_nvme_attach_controller" 00:08:01.248 }, 00:08:01.248 { 00:08:01.248 "method": "bdev_wait_for_examine" 00:08:01.248 } 00:08:01.248 ] 00:08:01.248 } 00:08:01.248 ] 00:08:01.248 } 00:08:01.507 [2024-11-20 08:41:32.230666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.507 [2024-11-20 08:41:32.314955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.507 [2024-11-20 08:41:32.376182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.764  [2024-11-20T08:41:32.938Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:02.023 00:08:02.023 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:02.023 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:02.023 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:02.023 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:02.023 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:02.023 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:02.023 08:41:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.590 08:41:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:02.590 08:41:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:02.590 08:41:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.590 08:41:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.590 { 00:08:02.590 "subsystems": [ 00:08:02.590 { 00:08:02.590 "subsystem": "bdev", 00:08:02.590 "config": [ 00:08:02.590 { 00:08:02.590 "params": { 00:08:02.590 "trtype": "pcie", 00:08:02.590 "traddr": "0000:00:10.0", 00:08:02.590 "name": "Nvme0" 00:08:02.590 }, 00:08:02.590 "method": "bdev_nvme_attach_controller" 00:08:02.590 }, 00:08:02.590 { 00:08:02.590 "method": "bdev_wait_for_examine" 00:08:02.590 } 00:08:02.590 ] 00:08:02.590 } 00:08:02.590 ] 00:08:02.590 } 00:08:02.590 [2024-11-20 08:41:33.350387] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:02.590 [2024-11-20 08:41:33.350521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60088 ] 00:08:02.591 [2024-11-20 08:41:33.497415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.849 [2024-11-20 08:41:33.599830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.849 [2024-11-20 08:41:33.674671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.107  [2024-11-20T08:41:34.281Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:03.366 00:08:03.366 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:03.366 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:03.366 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.366 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.366 { 00:08:03.366 "subsystems": [ 00:08:03.366 { 00:08:03.366 "subsystem": "bdev", 00:08:03.366 "config": [ 00:08:03.366 { 00:08:03.366 "params": { 00:08:03.366 "trtype": "pcie", 00:08:03.366 "traddr": "0000:00:10.0", 00:08:03.366 "name": "Nvme0" 00:08:03.366 }, 00:08:03.366 "method": "bdev_nvme_attach_controller" 00:08:03.366 }, 00:08:03.366 { 00:08:03.366 "method": "bdev_wait_for_examine" 00:08:03.366 } 00:08:03.366 ] 00:08:03.366 } 00:08:03.366 ] 00:08:03.366 } 00:08:03.366 [2024-11-20 08:41:34.116212] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:03.366 [2024-11-20 08:41:34.116360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60102 ] 00:08:03.366 [2024-11-20 08:41:34.264872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.624 [2024-11-20 08:41:34.343812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.624 [2024-11-20 08:41:34.414719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.883  [2024-11-20T08:41:34.798Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:03.883 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:04.142 08:41:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.142 { 00:08:04.142 "subsystems": [ 00:08:04.142 { 00:08:04.142 "subsystem": "bdev", 00:08:04.142 "config": [ 00:08:04.142 { 00:08:04.142 "params": { 00:08:04.142 "trtype": "pcie", 00:08:04.142 "traddr": "0000:00:10.0", 00:08:04.142 "name": "Nvme0" 00:08:04.142 }, 00:08:04.142 "method": "bdev_nvme_attach_controller" 00:08:04.142 }, 00:08:04.142 { 00:08:04.142 "method": "bdev_wait_for_examine" 00:08:04.142 } 00:08:04.142 ] 00:08:04.142 } 00:08:04.142 ] 00:08:04.142 } 00:08:04.142 [2024-11-20 08:41:34.857230] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:04.142 [2024-11-20 08:41:34.857347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:08:04.142 [2024-11-20 08:41:35.011153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.400 [2024-11-20 08:41:35.096311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.400 [2024-11-20 08:41:35.170912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.400  [2024-11-20T08:41:35.573Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:04.658 00:08:04.658 00:08:04.658 real 0m17.059s 00:08:04.658 user 0m12.530s 00:08:04.658 sys 0m6.807s 00:08:04.658 ************************************ 00:08:04.658 END TEST dd_rw 00:08:04.658 ************************************ 00:08:04.658 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.658 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.916 ************************************ 00:08:04.916 START TEST dd_rw_offset 00:08:04.916 ************************************ 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:04.916 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:04.917 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=7db6mpsiwhb0v0buzpr9etdnnycxik9rt0t2eu336h63sloov41exb5amzetg8233l7iev4rr3nay5vr27t1v9o4jwdxuwfugtmmqnexofi67tgb6oakjzs7sh6ia3znuyob4h99s3ra4yq7zvap0bh99o87pz9k1tr02ga9xj645cl3grlthh6ys9e5m8z36r7d0e435ci60ehdf2wlxyn3dmnzzs9cqilr6l3x1hsbjg9782htl5lhg2hwrrd9h7w56tkj4vcklc4dzxpg13zldwurlvmhbhkq60dtzfm4d6d4v2sl6fwmpz77gmnd3t2qstzc61mjlqufs3oy8ik8iqgso44u5xwbv1omobjim6ufpiz866os864qcqbqk1hmcdurnc64irn26lozy49q5rkp7f8e6l65dt7kh4g9ckhpcgs3p2spi9sbxn3ks285z94h1w68flxkzqta09m23i3ddcx81sfi5f0ws3367h2n6t356p8wldpbacyju7j3mrhrpxdveijqnjsyh22gnwb5jn7otrrtznzgw95o1ah211yh8h999clrxiegtot9y67r0xr9yn68rkkfd7bz366uee3sgbljkw9twb64769nb71kgegh0joi8fq9l181cw3tafymmc6om45zzpdop2t0zior67p1feefrnjcfweyrohqujvpkoz9l45f135oi56kd67vvuf0pylabdnid0i0r41ieavxphk2zpdbam4nzhvcwzjh74d6y2d9uhwlb3tlpf4dgp0d5nn9t84kuwf5x1zc1hn52spywecic1zczj5hgq0prmtcbjymkeglx2pl0fy0h9z0tz3tq1pqwquolw9hfrzjfffoj8t6js78fs67mxld33g986mxoxsgdddw927agm10dvqltpp7qizl9c0g1o19f8fspamoqqs01exo2ei4nsg3mv3lg32wkfa8s0212sagsca43h3iw1ckgwl3u49y0eu1pnzd4fuw3cw28zdofjc5f6ny1qjbyx9xv66muvz02yzm18eh1i9tu5mcuvuofsxa5z6415fuosirn8n5z3foa12gtttmz63x98nlzrttxi1srxci2i5ekkvehzo4irklbxre6kty8cbsyfi9cx6jpyo3v1zfdvlyukr8hdpudmb7ct9znt68gvg8d0kqe5415o4p0j5dfbz0kcfy7631u1j14iowtcyjka5n8diq1z0jkyy3lm0xsy9girhy4gl3oq8kyc1tteoz44gmoetjdwm4ezokzokp0f1xzgruw6zgipvv1irdr2gi9pyoerrrzr2si66etk32usmzkrzwnid7n0pfdymugdbepgcuwsbkbqkhu69usz1fq3kir87r5386gp376c88yfhp8gs83qoxvz67wjod7p221unfk7ggwyibwhfhhujd6wrn0pgh40szo53u8bj5vl0owwam9ol2i0w0wxfn721czovluccwu4skeoy6tjr0z0mf9cnfa6t7xhjjxd357ty8ic4i4n0wtnd2qge5l6sn6k2hri2y3eo1hbz29e30uqzhea1fjmvxklkr8sk06ekp5bmuiqo3pg51e7iqzd25eonyh6edwq2rv5uaa71c9dce0senmjinth7495m7fn175pxsmcw7d7ziaibm6s8qua09cf5ro2ve5df0n47i1hp437wesjuslf9xfxogi47xakxpntjmivdl2ijy274klplqe4bbrnbw9z0eji8mu74pm6j13wvsodrheoyuu9drocswfllyy0qlwsz83oqkdsatck1p26nj34d282w7f9afqnyhoglwhgdv4z28xo116pciryi05d6nyty39lg89j7nl2lfp5tnrz2av9szr2fgria6uk7zkteyb973kpkofml2aahid6vqzq4tona5w95tgsep1pmhhl9pyjjcynixiedigul09l60p4hgfgtls1ptvplds95or857sm4fybgm8h8hie7qbqxthnlitc280581rvs74nbzcelrx9wo8vy3ldhkmmk3pkm8krbgp40yq6eozhqht73wz3b3ekxzsidswql328zcbdwvq0ci2bgsmylq1dbh8530cyk6gely9y5jmx7yzhjfn3tdp8wuz69uy2p9mblgv72nlckg25n7u7u80f6ufdgpvibpsz1cio53juny3db1xu9z42l67ff03yv3jkh1pc9kvteh822jbbev2ngnkw91gxpfz7pw01s13hek6roq8q5f7u3hka68o2rwwnlo0anbby9fl6n701dinlx7dungpvhsd5g2o90tbl14xttfh6jscwoamusv835s5z97mksord5fukzlli6dd9jqflmr14m169oj6756lc246hqhh0aqfgystkbu7eoze4rigtlv0kdkyb6ilbiljcwltasy7mt2ma7975mansdzm018ufshwt0u4obcumlo4hslhqui4jb50a619wnbpahug1ii3g5ytsm4guqnn3i4ieal0umr2vwk3cfelvvfye2e6tdua96eft2ea9emi2okaczygu5trm56y518f9s4znobi0xgr1t9ewwwx5lo3a07brkvnilhadb8pcsqy7tzyae8ow4jre6fr7qubnidyh0paygfhqrhxmlxatg7eg3hlen8joxyxbvcd4wko2ax517ipn4h4nr33hiyc35difhtufbkowzhnv5it5wkecfiu91xfgf5qvf9nn88gh0uh62347e8by8bolm001et3gir55byg5hon2htnhpszyy2gg0o3cgnsghm94mgyj7rfguzc1xk44fjyiheti91ake3s2xy0u1szwvsjin2v27hvdpp0dh0lzpwpqhanqpqwphi64cblnz1met0omghesb12tjze0c5gk2u8n5a6jicqak9m7fifow2osn71lxhidr1uz17n61wyf1fz6nz4m0cc6cvd2zcnl2i1jov6hlh2otxdnllgjyndip4n6t0dfk32rhqg2btjpkkwtsh9t4fc9651qtg6bwbfyymqyww4a3oet07ubdukhsrf9721wxge6rv4953w4jfay3wpa64hb47yq98t66tvtbc95orduk2awexpgl4thx4oy3pxuwralcz5dcujtwwjy7peq7uksdqiqwm3f6ffsq4m8mf9bkfbdkvzd6sx4vnkhym5ddgz32d7wd4e4y7lb2xich392vj8m08upqkk3fa9scm0a1th5qf3wswajr2yvnqdq2pouegkyebcxcbo8j9fxa4io8rj17sz03ye16mnthffqyp3bwfkd18cvhmqn6iio31tt5ocmtzmtr1tl1kgumv66r6hyt12yi9pvc7588iqp12l34cs09z4m65r6ajc67zlh2zzmljg5cnctxbk8y3lki0b36d4ve0j98jejzjc0k83j82ju8tmgx4lyen1tfzrdjy8zarhd3rclljb36zyr1tq3ypbhs4km6t510i2zacryor9uch0otigwkc6yydoib8to0n5kq31plemft7wu
9e7384wxpn97jumvqs13a7rw7audih5o4twoulb7z38ezqxv5mwt0f5g19b0k42maiyiw1g28i0b3nrk73hcitprykiteerylukinmh58iiqgjb35rgyfe57416ii0fxcdol19qm4ftt9600bhf8znbnjcny51sa38pc5958obtuzbfutyk8fdc2azyotzll09nzarmbdvtga5ecvp5vx9mro0e2zn7jxbgf2wewum642abq24dz2cbxos4q3gvw64b8trarw8ih3v6cpkr86rim0c4j38g0hkt4ht1w9qbqkz96npfbz9ozcypn8qki1tyfntcxiv3unchyzz44pllgl12ssqnvf6jliviqil70w6znbbh2wfd0vhibwzgukohs691mjvpw7w25y8kpi3b3p565tjwyblgkvm0yrcobv8viwtzenec8x34iw8qaylxl8d2ygq4gmda7o4d0ep3sflv99d59u9hheal6ao3uexjwi9x2hst7258ifkv387qv278eupgienajsc5v85634173lldn95 00:08:04.917 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:04.917 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:04.917 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:04.917 08:41:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:04.917 { 00:08:04.917 "subsystems": [ 00:08:04.917 { 00:08:04.917 "subsystem": "bdev", 00:08:04.917 "config": [ 00:08:04.917 { 00:08:04.917 "params": { 00:08:04.917 "trtype": "pcie", 00:08:04.917 "traddr": "0000:00:10.0", 00:08:04.917 "name": "Nvme0" 00:08:04.917 }, 00:08:04.917 "method": "bdev_nvme_attach_controller" 00:08:04.917 }, 00:08:04.917 { 00:08:04.917 "method": "bdev_wait_for_examine" 00:08:04.917 } 00:08:04.917 ] 00:08:04.917 } 00:08:04.917 ] 00:08:04.917 } 00:08:04.917 [2024-11-20 08:41:35.712813] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:04.917 [2024-11-20 08:41:35.712922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60153 ] 00:08:05.219 [2024-11-20 08:41:35.856999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.219 [2024-11-20 08:41:35.935719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.219 [2024-11-20 08:41:36.007962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.518  [2024-11-20T08:41:36.433Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:05.518 00:08:05.518 08:41:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:05.518 08:41:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:05.518 08:41:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:05.518 08:41:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:05.777 { 00:08:05.777 "subsystems": [ 00:08:05.777 { 00:08:05.777 "subsystem": "bdev", 00:08:05.777 "config": [ 00:08:05.777 { 00:08:05.777 "params": { 00:08:05.777 "trtype": "pcie", 00:08:05.777 "traddr": "0000:00:10.0", 00:08:05.777 "name": "Nvme0" 00:08:05.777 }, 00:08:05.777 "method": "bdev_nvme_attach_controller" 00:08:05.777 }, 00:08:05.777 { 00:08:05.777 "method": "bdev_wait_for_examine" 00:08:05.777 } 00:08:05.777 ] 00:08:05.777 } 00:08:05.777 ] 00:08:05.777 } 00:08:05.777 [2024-11-20 08:41:36.447168] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
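The dd_rw_offset case writes one generated 4 KiB block at block offset 1 (--seek=1), reads it back from the same offset (--skip=1 --count=1), and compares the two payloads character for character; that comparison is what the read -rn4096 and [[ ... ]] lines just below are doing. A rough equivalent, with the same caveats as the earlier sketches about helper names and config plumbing:

  # Offset round trip matching the spdk_dd invocations traced above (sketch).
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  data=$(gen_bytes 4096)                 # 4096 generated characters, as in the trace
  printf '%s' "$data" > "$dump0"
  "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)             # write at block offset 1
  "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --skip=1 --count=1 --json <(gen_conf)   # read that block back
  read -rn4096 data_check < "$dump1"
  [[ $data == "$data_check" ]]           # matches only if the seek and skip offsets line up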
00:08:05.777 [2024-11-20 08:41:36.447276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60171 ] 00:08:05.777 [2024-11-20 08:41:36.592724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.777 [2024-11-20 08:41:36.670626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.034 [2024-11-20 08:41:36.742279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.034  [2024-11-20T08:41:37.209Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:06.294 00:08:06.294 08:41:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:06.294 ************************************ 00:08:06.294 END TEST dd_rw_offset 00:08:06.294 ************************************ 00:08:06.294 08:41:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 7db6mpsiwhb0v0buzpr9etdnnycxik9rt0t2eu336h63sloov41exb5amzetg8233l7iev4rr3nay5vr27t1v9o4jwdxuwfugtmmqnexofi67tgb6oakjzs7sh6ia3znuyob4h99s3ra4yq7zvap0bh99o87pz9k1tr02ga9xj645cl3grlthh6ys9e5m8z36r7d0e435ci60ehdf2wlxyn3dmnzzs9cqilr6l3x1hsbjg9782htl5lhg2hwrrd9h7w56tkj4vcklc4dzxpg13zldwurlvmhbhkq60dtzfm4d6d4v2sl6fwmpz77gmnd3t2qstzc61mjlqufs3oy8ik8iqgso44u5xwbv1omobjim6ufpiz866os864qcqbqk1hmcdurnc64irn26lozy49q5rkp7f8e6l65dt7kh4g9ckhpcgs3p2spi9sbxn3ks285z94h1w68flxkzqta09m23i3ddcx81sfi5f0ws3367h2n6t356p8wldpbacyju7j3mrhrpxdveijqnjsyh22gnwb5jn7otrrtznzgw95o1ah211yh8h999clrxiegtot9y67r0xr9yn68rkkfd7bz366uee3sgbljkw9twb64769nb71kgegh0joi8fq9l181cw3tafymmc6om45zzpdop2t0zior67p1feefrnjcfweyrohqujvpkoz9l45f135oi56kd67vvuf0pylabdnid0i0r41ieavxphk2zpdbam4nzhvcwzjh74d6y2d9uhwlb3tlpf4dgp0d5nn9t84kuwf5x1zc1hn52spywecic1zczj5hgq0prmtcbjymkeglx2pl0fy0h9z0tz3tq1pqwquolw9hfrzjfffoj8t6js78fs67mxld33g986mxoxsgdddw927agm10dvqltpp7qizl9c0g1o19f8fspamoqqs01exo2ei4nsg3mv3lg32wkfa8s0212sagsca43h3iw1ckgwl3u49y0eu1pnzd4fuw3cw28zdofjc5f6ny1qjbyx9xv66muvz02yzm18eh1i9tu5mcuvuofsxa5z6415fuosirn8n5z3foa12gtttmz63x98nlzrttxi1srxci2i5ekkvehzo4irklbxre6kty8cbsyfi9cx6jpyo3v1zfdvlyukr8hdpudmb7ct9znt68gvg8d0kqe5415o4p0j5dfbz0kcfy7631u1j14iowtcyjka5n8diq1z0jkyy3lm0xsy9girhy4gl3oq8kyc1tteoz44gmoetjdwm4ezokzokp0f1xzgruw6zgipvv1irdr2gi9pyoerrrzr2si66etk32usmzkrzwnid7n0pfdymugdbepgcuwsbkbqkhu69usz1fq3kir87r5386gp376c88yfhp8gs83qoxvz67wjod7p221unfk7ggwyibwhfhhujd6wrn0pgh40szo53u8bj5vl0owwam9ol2i0w0wxfn721czovluccwu4skeoy6tjr0z0mf9cnfa6t7xhjjxd357ty8ic4i4n0wtnd2qge5l6sn6k2hri2y3eo1hbz29e30uqzhea1fjmvxklkr8sk06ekp5bmuiqo3pg51e7iqzd25eonyh6edwq2rv5uaa71c9dce0senmjinth7495m7fn175pxsmcw7d7ziaibm6s8qua09cf5ro2ve5df0n47i1hp437wesjuslf9xfxogi47xakxpntjmivdl2ijy274klplqe4bbrnbw9z0eji8mu74pm6j13wvsodrheoyuu9drocswfllyy0qlwsz83oqkdsatck1p26nj34d282w7f9afqnyhoglwhgdv4z28xo116pciryi05d6nyty39lg89j7nl2lfp5tnrz2av9szr2fgria6uk7zkteyb973kpkofml2aahid6vqzq4tona5w95tgsep1pmhhl9pyjjcynixiedigul09l60p4hgfgtls1ptvplds95or857sm4fybgm8h8hie7qbqxthnlitc280581rvs74nbzcelrx9wo8vy3ldhkmmk3pkm8krbgp40yq6eozhqht73wz3b3ekxzsidswql328zcbdwvq0ci2bgsmylq1dbh8530cyk6gely9y5jmx7yzhjfn3tdp8wuz69uy2p9mblgv72nlckg25n7u7u80f6ufdgpvibpsz1cio53juny3db1xu9z42l67ff03yv3jkh1pc9kvteh822jbbev2ngnkw91gxpfz7pw01s13hek6roq8q5f7u3hka68o2rwwnlo0anbby9fl6n701dinlx7dungpvhsd5g2o90tbl14xttfh6jscwoamusv835s5z97mksord5fukzlli6dd9jqflmr14m169oj6756lc246hqhh0aqfgystkbu7eoze4rigtlv0kdkyb6ilbiljcwltasy7mt2ma7975mansdzm018ufshwt0u4obcu
mlo4hslhqui4jb50a619wnbpahug1ii3g5ytsm4guqnn3i4ieal0umr2vwk3cfelvvfye2e6tdua96eft2ea9emi2okaczygu5trm56y518f9s4znobi0xgr1t9ewwwx5lo3a07brkvnilhadb8pcsqy7tzyae8ow4jre6fr7qubnidyh0paygfhqrhxmlxatg7eg3hlen8joxyxbvcd4wko2ax517ipn4h4nr33hiyc35difhtufbkowzhnv5it5wkecfiu91xfgf5qvf9nn88gh0uh62347e8by8bolm001et3gir55byg5hon2htnhpszyy2gg0o3cgnsghm94mgyj7rfguzc1xk44fjyiheti91ake3s2xy0u1szwvsjin2v27hvdpp0dh0lzpwpqhanqpqwphi64cblnz1met0omghesb12tjze0c5gk2u8n5a6jicqak9m7fifow2osn71lxhidr1uz17n61wyf1fz6nz4m0cc6cvd2zcnl2i1jov6hlh2otxdnllgjyndip4n6t0dfk32rhqg2btjpkkwtsh9t4fc9651qtg6bwbfyymqyww4a3oet07ubdukhsrf9721wxge6rv4953w4jfay3wpa64hb47yq98t66tvtbc95orduk2awexpgl4thx4oy3pxuwralcz5dcujtwwjy7peq7uksdqiqwm3f6ffsq4m8mf9bkfbdkvzd6sx4vnkhym5ddgz32d7wd4e4y7lb2xich392vj8m08upqkk3fa9scm0a1th5qf3wswajr2yvnqdq2pouegkyebcxcbo8j9fxa4io8rj17sz03ye16mnthffqyp3bwfkd18cvhmqn6iio31tt5ocmtzmtr1tl1kgumv66r6hyt12yi9pvc7588iqp12l34cs09z4m65r6ajc67zlh2zzmljg5cnctxbk8y3lki0b36d4ve0j98jejzjc0k83j82ju8tmgx4lyen1tfzrdjy8zarhd3rclljb36zyr1tq3ypbhs4km6t510i2zacryor9uch0otigwkc6yydoib8to0n5kq31plemft7wu9e7384wxpn97jumvqs13a7rw7audih5o4twoulb7z38ezqxv5mwt0f5g19b0k42maiyiw1g28i0b3nrk73hcitprykiteerylukinmh58iiqgjb35rgyfe57416ii0fxcdol19qm4ftt9600bhf8znbnjcny51sa38pc5958obtuzbfutyk8fdc2azyotzll09nzarmbdvtga5ecvp5vx9mro0e2zn7jxbgf2wewum642abq24dz2cbxos4q3gvw64b8trarw8ih3v6cpkr86rim0c4j38g0hkt4ht1w9qbqkz96npfbz9ozcypn8qki1tyfntcxiv3unchyzz44pllgl12ssqnvf6jliviqil70w6znbbh2wfd0vhibwzgukohs691mjvpw7w25y8kpi3b3p565tjwyblgkvm0yrcobv8viwtzenec8x34iw8qaylxl8d2ygq4gmda7o4d0ep3sflv99d59u9hheal6ao3uexjwi9x2hst7258ifkv387qv278eupgienajsc5v85634173lldn95 == \7\d\b\6\m\p\s\i\w\h\b\0\v\0\b\u\z\p\r\9\e\t\d\n\n\y\c\x\i\k\9\r\t\0\t\2\e\u\3\3\6\h\6\3\s\l\o\o\v\4\1\e\x\b\5\a\m\z\e\t\g\8\2\3\3\l\7\i\e\v\4\r\r\3\n\a\y\5\v\r\2\7\t\1\v\9\o\4\j\w\d\x\u\w\f\u\g\t\m\m\q\n\e\x\o\f\i\6\7\t\g\b\6\o\a\k\j\z\s\7\s\h\6\i\a\3\z\n\u\y\o\b\4\h\9\9\s\3\r\a\4\y\q\7\z\v\a\p\0\b\h\9\9\o\8\7\p\z\9\k\1\t\r\0\2\g\a\9\x\j\6\4\5\c\l\3\g\r\l\t\h\h\6\y\s\9\e\5\m\8\z\3\6\r\7\d\0\e\4\3\5\c\i\6\0\e\h\d\f\2\w\l\x\y\n\3\d\m\n\z\z\s\9\c\q\i\l\r\6\l\3\x\1\h\s\b\j\g\9\7\8\2\h\t\l\5\l\h\g\2\h\w\r\r\d\9\h\7\w\5\6\t\k\j\4\v\c\k\l\c\4\d\z\x\p\g\1\3\z\l\d\w\u\r\l\v\m\h\b\h\k\q\6\0\d\t\z\f\m\4\d\6\d\4\v\2\s\l\6\f\w\m\p\z\7\7\g\m\n\d\3\t\2\q\s\t\z\c\6\1\m\j\l\q\u\f\s\3\o\y\8\i\k\8\i\q\g\s\o\4\4\u\5\x\w\b\v\1\o\m\o\b\j\i\m\6\u\f\p\i\z\8\6\6\o\s\8\6\4\q\c\q\b\q\k\1\h\m\c\d\u\r\n\c\6\4\i\r\n\2\6\l\o\z\y\4\9\q\5\r\k\p\7\f\8\e\6\l\6\5\d\t\7\k\h\4\g\9\c\k\h\p\c\g\s\3\p\2\s\p\i\9\s\b\x\n\3\k\s\2\8\5\z\9\4\h\1\w\6\8\f\l\x\k\z\q\t\a\0\9\m\2\3\i\3\d\d\c\x\8\1\s\f\i\5\f\0\w\s\3\3\6\7\h\2\n\6\t\3\5\6\p\8\w\l\d\p\b\a\c\y\j\u\7\j\3\m\r\h\r\p\x\d\v\e\i\j\q\n\j\s\y\h\2\2\g\n\w\b\5\j\n\7\o\t\r\r\t\z\n\z\g\w\9\5\o\1\a\h\2\1\1\y\h\8\h\9\9\9\c\l\r\x\i\e\g\t\o\t\9\y\6\7\r\0\x\r\9\y\n\6\8\r\k\k\f\d\7\b\z\3\6\6\u\e\e\3\s\g\b\l\j\k\w\9\t\w\b\6\4\7\6\9\n\b\7\1\k\g\e\g\h\0\j\o\i\8\f\q\9\l\1\8\1\c\w\3\t\a\f\y\m\m\c\6\o\m\4\5\z\z\p\d\o\p\2\t\0\z\i\o\r\6\7\p\1\f\e\e\f\r\n\j\c\f\w\e\y\r\o\h\q\u\j\v\p\k\o\z\9\l\4\5\f\1\3\5\o\i\5\6\k\d\6\7\v\v\u\f\0\p\y\l\a\b\d\n\i\d\0\i\0\r\4\1\i\e\a\v\x\p\h\k\2\z\p\d\b\a\m\4\n\z\h\v\c\w\z\j\h\7\4\d\6\y\2\d\9\u\h\w\l\b\3\t\l\p\f\4\d\g\p\0\d\5\n\n\9\t\8\4\k\u\w\f\5\x\1\z\c\1\h\n\5\2\s\p\y\w\e\c\i\c\1\z\c\z\j\5\h\g\q\0\p\r\m\t\c\b\j\y\m\k\e\g\l\x\2\p\l\0\f\y\0\h\9\z\0\t\z\3\t\q\1\p\q\w\q\u\o\l\w\9\h\f\r\z\j\f\f\f\o\j\8\t\6\j\s\7\8\f\s\6\7\m\x\l\d\3\3\g\9\8\6\m\x\o\x\s\g\d\d\d\w\9\2\7\a\g\m\1\0\d\v\q\l\t\p\p\7\q\i\z\l\9\c\0\g\1\o\1\9\f\8\f\s\p\a\m\o\q\q\s\0\1\e\x\o\2\e\i\4\n\s\g\3\m\v\3\l\g\3\2\w
\k\f\a\8\s\0\2\1\2\s\a\g\s\c\a\4\3\h\3\i\w\1\c\k\g\w\l\3\u\4\9\y\0\e\u\1\p\n\z\d\4\f\u\w\3\c\w\2\8\z\d\o\f\j\c\5\f\6\n\y\1\q\j\b\y\x\9\x\v\6\6\m\u\v\z\0\2\y\z\m\1\8\e\h\1\i\9\t\u\5\m\c\u\v\u\o\f\s\x\a\5\z\6\4\1\5\f\u\o\s\i\r\n\8\n\5\z\3\f\o\a\1\2\g\t\t\t\m\z\6\3\x\9\8\n\l\z\r\t\t\x\i\1\s\r\x\c\i\2\i\5\e\k\k\v\e\h\z\o\4\i\r\k\l\b\x\r\e\6\k\t\y\8\c\b\s\y\f\i\9\c\x\6\j\p\y\o\3\v\1\z\f\d\v\l\y\u\k\r\8\h\d\p\u\d\m\b\7\c\t\9\z\n\t\6\8\g\v\g\8\d\0\k\q\e\5\4\1\5\o\4\p\0\j\5\d\f\b\z\0\k\c\f\y\7\6\3\1\u\1\j\1\4\i\o\w\t\c\y\j\k\a\5\n\8\d\i\q\1\z\0\j\k\y\y\3\l\m\0\x\s\y\9\g\i\r\h\y\4\g\l\3\o\q\8\k\y\c\1\t\t\e\o\z\4\4\g\m\o\e\t\j\d\w\m\4\e\z\o\k\z\o\k\p\0\f\1\x\z\g\r\u\w\6\z\g\i\p\v\v\1\i\r\d\r\2\g\i\9\p\y\o\e\r\r\r\z\r\2\s\i\6\6\e\t\k\3\2\u\s\m\z\k\r\z\w\n\i\d\7\n\0\p\f\d\y\m\u\g\d\b\e\p\g\c\u\w\s\b\k\b\q\k\h\u\6\9\u\s\z\1\f\q\3\k\i\r\8\7\r\5\3\8\6\g\p\3\7\6\c\8\8\y\f\h\p\8\g\s\8\3\q\o\x\v\z\6\7\w\j\o\d\7\p\2\2\1\u\n\f\k\7\g\g\w\y\i\b\w\h\f\h\h\u\j\d\6\w\r\n\0\p\g\h\4\0\s\z\o\5\3\u\8\b\j\5\v\l\0\o\w\w\a\m\9\o\l\2\i\0\w\0\w\x\f\n\7\2\1\c\z\o\v\l\u\c\c\w\u\4\s\k\e\o\y\6\t\j\r\0\z\0\m\f\9\c\n\f\a\6\t\7\x\h\j\j\x\d\3\5\7\t\y\8\i\c\4\i\4\n\0\w\t\n\d\2\q\g\e\5\l\6\s\n\6\k\2\h\r\i\2\y\3\e\o\1\h\b\z\2\9\e\3\0\u\q\z\h\e\a\1\f\j\m\v\x\k\l\k\r\8\s\k\0\6\e\k\p\5\b\m\u\i\q\o\3\p\g\5\1\e\7\i\q\z\d\2\5\e\o\n\y\h\6\e\d\w\q\2\r\v\5\u\a\a\7\1\c\9\d\c\e\0\s\e\n\m\j\i\n\t\h\7\4\9\5\m\7\f\n\1\7\5\p\x\s\m\c\w\7\d\7\z\i\a\i\b\m\6\s\8\q\u\a\0\9\c\f\5\r\o\2\v\e\5\d\f\0\n\4\7\i\1\h\p\4\3\7\w\e\s\j\u\s\l\f\9\x\f\x\o\g\i\4\7\x\a\k\x\p\n\t\j\m\i\v\d\l\2\i\j\y\2\7\4\k\l\p\l\q\e\4\b\b\r\n\b\w\9\z\0\e\j\i\8\m\u\7\4\p\m\6\j\1\3\w\v\s\o\d\r\h\e\o\y\u\u\9\d\r\o\c\s\w\f\l\l\y\y\0\q\l\w\s\z\8\3\o\q\k\d\s\a\t\c\k\1\p\2\6\n\j\3\4\d\2\8\2\w\7\f\9\a\f\q\n\y\h\o\g\l\w\h\g\d\v\4\z\2\8\x\o\1\1\6\p\c\i\r\y\i\0\5\d\6\n\y\t\y\3\9\l\g\8\9\j\7\n\l\2\l\f\p\5\t\n\r\z\2\a\v\9\s\z\r\2\f\g\r\i\a\6\u\k\7\z\k\t\e\y\b\9\7\3\k\p\k\o\f\m\l\2\a\a\h\i\d\6\v\q\z\q\4\t\o\n\a\5\w\9\5\t\g\s\e\p\1\p\m\h\h\l\9\p\y\j\j\c\y\n\i\x\i\e\d\i\g\u\l\0\9\l\6\0\p\4\h\g\f\g\t\l\s\1\p\t\v\p\l\d\s\9\5\o\r\8\5\7\s\m\4\f\y\b\g\m\8\h\8\h\i\e\7\q\b\q\x\t\h\n\l\i\t\c\2\8\0\5\8\1\r\v\s\7\4\n\b\z\c\e\l\r\x\9\w\o\8\v\y\3\l\d\h\k\m\m\k\3\p\k\m\8\k\r\b\g\p\4\0\y\q\6\e\o\z\h\q\h\t\7\3\w\z\3\b\3\e\k\x\z\s\i\d\s\w\q\l\3\2\8\z\c\b\d\w\v\q\0\c\i\2\b\g\s\m\y\l\q\1\d\b\h\8\5\3\0\c\y\k\6\g\e\l\y\9\y\5\j\m\x\7\y\z\h\j\f\n\3\t\d\p\8\w\u\z\6\9\u\y\2\p\9\m\b\l\g\v\7\2\n\l\c\k\g\2\5\n\7\u\7\u\8\0\f\6\u\f\d\g\p\v\i\b\p\s\z\1\c\i\o\5\3\j\u\n\y\3\d\b\1\x\u\9\z\4\2\l\6\7\f\f\0\3\y\v\3\j\k\h\1\p\c\9\k\v\t\e\h\8\2\2\j\b\b\e\v\2\n\g\n\k\w\9\1\g\x\p\f\z\7\p\w\0\1\s\1\3\h\e\k\6\r\o\q\8\q\5\f\7\u\3\h\k\a\6\8\o\2\r\w\w\n\l\o\0\a\n\b\b\y\9\f\l\6\n\7\0\1\d\i\n\l\x\7\d\u\n\g\p\v\h\s\d\5\g\2\o\9\0\t\b\l\1\4\x\t\t\f\h\6\j\s\c\w\o\a\m\u\s\v\8\3\5\s\5\z\9\7\m\k\s\o\r\d\5\f\u\k\z\l\l\i\6\d\d\9\j\q\f\l\m\r\1\4\m\1\6\9\o\j\6\7\5\6\l\c\2\4\6\h\q\h\h\0\a\q\f\g\y\s\t\k\b\u\7\e\o\z\e\4\r\i\g\t\l\v\0\k\d\k\y\b\6\i\l\b\i\l\j\c\w\l\t\a\s\y\7\m\t\2\m\a\7\9\7\5\m\a\n\s\d\z\m\0\1\8\u\f\s\h\w\t\0\u\4\o\b\c\u\m\l\o\4\h\s\l\h\q\u\i\4\j\b\5\0\a\6\1\9\w\n\b\p\a\h\u\g\1\i\i\3\g\5\y\t\s\m\4\g\u\q\n\n\3\i\4\i\e\a\l\0\u\m\r\2\v\w\k\3\c\f\e\l\v\v\f\y\e\2\e\6\t\d\u\a\9\6\e\f\t\2\e\a\9\e\m\i\2\o\k\a\c\z\y\g\u\5\t\r\m\5\6\y\5\1\8\f\9\s\4\z\n\o\b\i\0\x\g\r\1\t\9\e\w\w\w\x\5\l\o\3\a\0\7\b\r\k\v\n\i\l\h\a\d\b\8\p\c\s\q\y\7\t\z\y\a\e\8\o\w\4\j\r\e\6\f\r\7\q\u\b\n\i\d\y\h\0\p\a\y\g\f\h\q\r\h\x\m\l\x\a\t\g\7\e\g\3\h\l\e\n\8\j\o\x\y\x\b\v\c\d\4\w\k\o\2\a\x\5\1\7\i\p\n\4\h\4\n\r\3\3\h\i\y\c\3\5\d\i\f\h\t\u\f\b\k\o\w\z\h\n\v\5\i\t\5\w\k\e\c\f\i\u\9\1\x\f\
g\f\5\q\v\f\9\n\n\8\8\g\h\0\u\h\6\2\3\4\7\e\8\b\y\8\b\o\l\m\0\0\1\e\t\3\g\i\r\5\5\b\y\g\5\h\o\n\2\h\t\n\h\p\s\z\y\y\2\g\g\0\o\3\c\g\n\s\g\h\m\9\4\m\g\y\j\7\r\f\g\u\z\c\1\x\k\4\4\f\j\y\i\h\e\t\i\9\1\a\k\e\3\s\2\x\y\0\u\1\s\z\w\v\s\j\i\n\2\v\2\7\h\v\d\p\p\0\d\h\0\l\z\p\w\p\q\h\a\n\q\p\q\w\p\h\i\6\4\c\b\l\n\z\1\m\e\t\0\o\m\g\h\e\s\b\1\2\t\j\z\e\0\c\5\g\k\2\u\8\n\5\a\6\j\i\c\q\a\k\9\m\7\f\i\f\o\w\2\o\s\n\7\1\l\x\h\i\d\r\1\u\z\1\7\n\6\1\w\y\f\1\f\z\6\n\z\4\m\0\c\c\6\c\v\d\2\z\c\n\l\2\i\1\j\o\v\6\h\l\h\2\o\t\x\d\n\l\l\g\j\y\n\d\i\p\4\n\6\t\0\d\f\k\3\2\r\h\q\g\2\b\t\j\p\k\k\w\t\s\h\9\t\4\f\c\9\6\5\1\q\t\g\6\b\w\b\f\y\y\m\q\y\w\w\4\a\3\o\e\t\0\7\u\b\d\u\k\h\s\r\f\9\7\2\1\w\x\g\e\6\r\v\4\9\5\3\w\4\j\f\a\y\3\w\p\a\6\4\h\b\4\7\y\q\9\8\t\6\6\t\v\t\b\c\9\5\o\r\d\u\k\2\a\w\e\x\p\g\l\4\t\h\x\4\o\y\3\p\x\u\w\r\a\l\c\z\5\d\c\u\j\t\w\w\j\y\7\p\e\q\7\u\k\s\d\q\i\q\w\m\3\f\6\f\f\s\q\4\m\8\m\f\9\b\k\f\b\d\k\v\z\d\6\s\x\4\v\n\k\h\y\m\5\d\d\g\z\3\2\d\7\w\d\4\e\4\y\7\l\b\2\x\i\c\h\3\9\2\v\j\8\m\0\8\u\p\q\k\k\3\f\a\9\s\c\m\0\a\1\t\h\5\q\f\3\w\s\w\a\j\r\2\y\v\n\q\d\q\2\p\o\u\e\g\k\y\e\b\c\x\c\b\o\8\j\9\f\x\a\4\i\o\8\r\j\1\7\s\z\0\3\y\e\1\6\m\n\t\h\f\f\q\y\p\3\b\w\f\k\d\1\8\c\v\h\m\q\n\6\i\i\o\3\1\t\t\5\o\c\m\t\z\m\t\r\1\t\l\1\k\g\u\m\v\6\6\r\6\h\y\t\1\2\y\i\9\p\v\c\7\5\8\8\i\q\p\1\2\l\3\4\c\s\0\9\z\4\m\6\5\r\6\a\j\c\6\7\z\l\h\2\z\z\m\l\j\g\5\c\n\c\t\x\b\k\8\y\3\l\k\i\0\b\3\6\d\4\v\e\0\j\9\8\j\e\j\z\j\c\0\k\8\3\j\8\2\j\u\8\t\m\g\x\4\l\y\e\n\1\t\f\z\r\d\j\y\8\z\a\r\h\d\3\r\c\l\l\j\b\3\6\z\y\r\1\t\q\3\y\p\b\h\s\4\k\m\6\t\5\1\0\i\2\z\a\c\r\y\o\r\9\u\c\h\0\o\t\i\g\w\k\c\6\y\y\d\o\i\b\8\t\o\0\n\5\k\q\3\1\p\l\e\m\f\t\7\w\u\9\e\7\3\8\4\w\x\p\n\9\7\j\u\m\v\q\s\1\3\a\7\r\w\7\a\u\d\i\h\5\o\4\t\w\o\u\l\b\7\z\3\8\e\z\q\x\v\5\m\w\t\0\f\5\g\1\9\b\0\k\4\2\m\a\i\y\i\w\1\g\2\8\i\0\b\3\n\r\k\7\3\h\c\i\t\p\r\y\k\i\t\e\e\r\y\l\u\k\i\n\m\h\5\8\i\i\q\g\j\b\3\5\r\g\y\f\e\5\7\4\1\6\i\i\0\f\x\c\d\o\l\1\9\q\m\4\f\t\t\9\6\0\0\b\h\f\8\z\n\b\n\j\c\n\y\5\1\s\a\3\8\p\c\5\9\5\8\o\b\t\u\z\b\f\u\t\y\k\8\f\d\c\2\a\z\y\o\t\z\l\l\0\9\n\z\a\r\m\b\d\v\t\g\a\5\e\c\v\p\5\v\x\9\m\r\o\0\e\2\z\n\7\j\x\b\g\f\2\w\e\w\u\m\6\4\2\a\b\q\2\4\d\z\2\c\b\x\o\s\4\q\3\g\v\w\6\4\b\8\t\r\a\r\w\8\i\h\3\v\6\c\p\k\r\8\6\r\i\m\0\c\4\j\3\8\g\0\h\k\t\4\h\t\1\w\9\q\b\q\k\z\9\6\n\p\f\b\z\9\o\z\c\y\p\n\8\q\k\i\1\t\y\f\n\t\c\x\i\v\3\u\n\c\h\y\z\z\4\4\p\l\l\g\l\1\2\s\s\q\n\v\f\6\j\l\i\v\i\q\i\l\7\0\w\6\z\n\b\b\h\2\w\f\d\0\v\h\i\b\w\z\g\u\k\o\h\s\6\9\1\m\j\v\p\w\7\w\2\5\y\8\k\p\i\3\b\3\p\5\6\5\t\j\w\y\b\l\g\k\v\m\0\y\r\c\o\b\v\8\v\i\w\t\z\e\n\e\c\8\x\3\4\i\w\8\q\a\y\l\x\l\8\d\2\y\g\q\4\g\m\d\a\7\o\4\d\0\e\p\3\s\f\l\v\9\9\d\5\9\u\9\h\h\e\a\l\6\a\o\3\u\e\x\j\w\i\9\x\2\h\s\t\7\2\5\8\i\f\k\v\3\8\7\q\v\2\7\8\e\u\p\g\i\e\n\a\j\s\c\5\v\8\5\6\3\4\1\7\3\l\l\d\n\9\5 ]] 00:08:06.295 00:08:06.295 real 0m1.540s 00:08:06.295 user 0m1.068s 00:08:06.295 sys 0m0.758s 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.295 08:41:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.553 { 00:08:06.553 "subsystems": [ 00:08:06.553 { 00:08:06.553 "subsystem": "bdev", 00:08:06.553 "config": [ 00:08:06.553 { 00:08:06.553 "params": { 00:08:06.553 "trtype": "pcie", 00:08:06.553 "traddr": "0000:00:10.0", 00:08:06.553 "name": "Nvme0" 00:08:06.553 }, 00:08:06.553 "method": "bdev_nvme_attach_controller" 00:08:06.553 }, 00:08:06.553 { 00:08:06.553 "method": "bdev_wait_for_examine" 00:08:06.553 } 00:08:06.553 ] 00:08:06.553 } 00:08:06.553 ] 00:08:06.553 } 00:08:06.553 [2024-11-20 08:41:37.246987] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:06.553 [2024-11-20 08:41:37.247325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60202 ] 00:08:06.553 [2024-11-20 08:41:37.393123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.811 [2024-11-20 08:41:37.473055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.811 [2024-11-20 08:41:37.544715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.811  [2024-11-20T08:41:37.984Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:07.069 00:08:07.069 08:41:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.069 ************************************ 00:08:07.069 END TEST spdk_dd_basic_rw 00:08:07.069 ************************************ 00:08:07.069 00:08:07.069 real 0m20.660s 00:08:07.069 user 0m14.835s 00:08:07.069 sys 0m8.350s 00:08:07.070 08:41:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.070 08:41:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.070 08:41:37 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:07.070 08:41:37 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.070 08:41:37 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.070 08:41:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:07.070 ************************************ 00:08:07.070 START TEST spdk_dd_posix 00:08:07.070 ************************************ 00:08:07.070 08:41:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:07.328 * Looking for test storage... 
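The dd_rw_offset case that finished above writes a 4 KiB random payload to the Nvme0n1 bdev one block in (--seek=1), reads the same region back with --skip=1 --count=1, and requires the read-back bytes to match the original string; the long escaped [[ ... == ... ]] block earlier is bash xtrace rendering exactly that comparison. A rough sketch of the round trip, assuming a config file nvme0.json like the one sketched above and spdk_dd on PATH:

    # Stand-in for the test's gen_bytes helper: 4096 printable random bytes.
    data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)
    printf '%s' "$data" > dd.dump0

    # Write at a one-block offset into the bdev, then read that block back out.
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 62< nvme0.json
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json /dev/fd/62 62< nvme0.json

    # Same check as the trace: read exactly 4096 bytes and require an exact match.
    read -rn4096 data_check < dd.dump1
    [[ "$data" == "$data_check" ]] || exit 1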
00:08:07.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.328 --rc genhtml_branch_coverage=1 00:08:07.328 --rc genhtml_function_coverage=1 00:08:07.328 --rc genhtml_legend=1 00:08:07.328 --rc geninfo_all_blocks=1 00:08:07.328 --rc geninfo_unexecuted_blocks=1 00:08:07.328 00:08:07.328 ' 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.328 --rc genhtml_branch_coverage=1 00:08:07.328 --rc genhtml_function_coverage=1 00:08:07.328 --rc genhtml_legend=1 00:08:07.328 --rc geninfo_all_blocks=1 00:08:07.328 --rc geninfo_unexecuted_blocks=1 00:08:07.328 00:08:07.328 ' 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.328 --rc genhtml_branch_coverage=1 00:08:07.328 --rc genhtml_function_coverage=1 00:08:07.328 --rc genhtml_legend=1 00:08:07.328 --rc geninfo_all_blocks=1 00:08:07.328 --rc geninfo_unexecuted_blocks=1 00:08:07.328 00:08:07.328 ' 00:08:07.328 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.328 --rc genhtml_branch_coverage=1 00:08:07.328 --rc genhtml_function_coverage=1 00:08:07.328 --rc genhtml_legend=1 00:08:07.328 --rc geninfo_all_blocks=1 00:08:07.328 --rc geninfo_unexecuted_blocks=1 00:08:07.328 00:08:07.328 ' 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:07.329 * First test run, liburing in use 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:07.329 ************************************ 00:08:07.329 START TEST dd_flag_append 00:08:07.329 ************************************ 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=m8o550gwyhdpqj14cgt2fo2cuoq1g0td 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=t3g60v7nb7gi9b8tl6uc0pmiktztamak 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s m8o550gwyhdpqj14cgt2fo2cuoq1g0td 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s t3g60v7nb7gi9b8tl6uc0pmiktztamak 00:08:07.329 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:07.329 [2024-11-20 08:41:38.231963] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
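dd_flag_append, started above, writes one 32-byte string to dd.dump0 and a second one to dd.dump1, then copies dump0 onto dump1 with --oflag=append; the comparison that follows in the log expects dump1 to end up as its own string immediately followed by dump0's. A compact sketch using the two values from this run (the real test draws them from gen_bytes):

    dump0=m8o550gwyhdpqj14cgt2fo2cuoq1g0td
    dump1=t3g60v7nb7gi9b8tl6uc0pmiktztamak
    printf '%s' "$dump0" > dd.dump0
    printf '%s' "$dump1" > dd.dump1

    # O_APPEND on the output file: the copied bytes land after the existing 32.
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append

    # dd.dump1 must now be the dump1 string with the dump0 string appended.
    [[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] || exit 1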
00:08:07.329 [2024-11-20 08:41:38.232333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:08:07.587 [2024-11-20 08:41:38.382602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.587 [2024-11-20 08:41:38.468243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.845 [2024-11-20 08:41:38.543147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.845  [2024-11-20T08:41:39.018Z] Copying: 32/32 [B] (average 31 kBps) 00:08:08.103 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ t3g60v7nb7gi9b8tl6uc0pmiktztamakm8o550gwyhdpqj14cgt2fo2cuoq1g0td == \t\3\g\6\0\v\7\n\b\7\g\i\9\b\8\t\l\6\u\c\0\p\m\i\k\t\z\t\a\m\a\k\m\8\o\5\5\0\g\w\y\h\d\p\q\j\1\4\c\g\t\2\f\o\2\c\u\o\q\1\g\0\t\d ]] 00:08:08.103 00:08:08.103 real 0m0.670s 00:08:08.103 user 0m0.395s 00:08:08.103 sys 0m0.343s 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.103 ************************************ 00:08:08.103 END TEST dd_flag_append 00:08:08.103 ************************************ 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:08.103 ************************************ 00:08:08.103 START TEST dd_flag_directory 00:08:08.103 ************************************ 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.103 08:41:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.103 [2024-11-20 08:41:38.950499] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:08.103 [2024-11-20 08:41:38.950875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60308 ] 00:08:08.362 [2024-11-20 08:41:39.095562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.362 [2024-11-20 08:41:39.174489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.362 [2024-11-20 08:41:39.246092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.620 [2024-11-20 08:41:39.295082] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:08.620 [2024-11-20 08:41:39.295152] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:08.620 [2024-11-20 08:41:39.295173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.620 [2024-11-20 08:41:39.460141] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.878 08:41:39 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.878 08:41:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:08.878 [2024-11-20 08:41:39.602965] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:08.878 [2024-11-20 08:41:39.603075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60317 ] 00:08:08.878 [2024-11-20 08:41:39.744746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.138 [2024-11-20 08:41:39.823519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.138 [2024-11-20 08:41:39.894911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.138 [2024-11-20 08:41:39.944324] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:09.138 [2024-11-20 08:41:39.944390] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:09.138 [2024-11-20 08:41:39.944412] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.448 [2024-11-20 08:41:40.111443] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:09.448 ************************************ 00:08:09.448 END TEST dd_flag_directory 00:08:09.448 ************************************ 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.448 00:08:09.448 real 0m1.297s 00:08:09.448 user 0m0.747s 00:08:09.448 sys 0m0.336s 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:09.448 08:41:40 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:09.448 ************************************ 00:08:09.448 START TEST dd_flag_nofollow 00:08:09.448 ************************************ 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:09.448 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:09.449 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.449 [2024-11-20 08:41:40.322781] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
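Before the dd_flag_nofollow run starting above, the two dd_flag_directory cases both point spdk_dd at a regular file with the directory flag set (first --iflag=directory on the input, then --oflag=directory on the output) and require the copy to fail with "Not a directory"; the NOT wrapper in the trace simply treats a non-zero exit as success. In outline, leaving out the NOT/es bookkeeping:

    # Both runs are expected to fail: dd.dump0 is a regular file, not a directory.
    ! spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0 || exit 1
    ! spdk_dd --if=dd.dump0 --of=dd.dump0 --oflag=directory || exit 1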
00:08:09.449 [2024-11-20 08:41:40.322905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60346 ] 00:08:09.708 [2024-11-20 08:41:40.467665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.708 [2024-11-20 08:41:40.546230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.968 [2024-11-20 08:41:40.622577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.968 [2024-11-20 08:41:40.672190] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:09.968 [2024-11-20 08:41:40.672270] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:09.968 [2024-11-20 08:41:40.672294] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.968 [2024-11-20 08:41:40.835066] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.228 08:41:40 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.228 08:41:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:10.228 [2024-11-20 08:41:40.978953] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:10.228 [2024-11-20 08:41:40.979338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60355 ] 00:08:10.228 [2024-11-20 08:41:41.124227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.487 [2024-11-20 08:41:41.202369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.487 [2024-11-20 08:41:41.272445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.487 [2024-11-20 08:41:41.321064] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:10.487 [2024-11-20 08:41:41.321132] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:10.487 [2024-11-20 08:41:41.321155] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.747 [2024-11-20 08:41:41.493998] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:10.747 08:41:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.747 [2024-11-20 08:41:41.641643] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
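dd_flag_nofollow, traced above, links dd.dump0.link and dd.dump1.link to the real dump files with ln -fs, requires spdk_dd to reject the symlink when nofollow is set on either side (the "Too many levels of symbolic links" errors in the log), and finishes with a flag-less 512-byte copy through the link, whose output follows. Roughly:

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link

    # With nofollow set, opening the symlink must fail on the input and on the output side.
    ! spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 || exit 1
    ! spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow || exit 1

    # Without the flag the link is followed and the copy succeeds.
    spdk_dd --if=dd.dump0.link --of=dd.dump1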
00:08:10.747 [2024-11-20 08:41:41.641768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60363 ] 00:08:11.007 [2024-11-20 08:41:41.789102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.007 [2024-11-20 08:41:41.868211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.267 [2024-11-20 08:41:41.939951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.267  [2024-11-20T08:41:42.442Z] Copying: 512/512 [B] (average 500 kBps) 00:08:11.527 00:08:11.527 ************************************ 00:08:11.527 END TEST dd_flag_nofollow 00:08:11.527 ************************************ 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ gp39v6afafm8lqchx2aeoqatlkep0ll6jfpkhvej82dmkadys74046pradeaueu1hlkguyj6hoqpt7ud3li0cm2vy80xhixb0rxavg400zuxy4ilst5reqehgtrqyszg8i687jme9pnic2dac52ylu3tibfp5lmzltz43eh9dm951b87gepzvipa0nsg0drdhjs4hudawgdf317sr6bd4d5w2j51sl4rqkks8xgz5s3mha1dhsgse9fshan1to0bn556r6e75nbq6eedlc2gdmnjylav6fx5wko1nqmqmpon0uy5e08tgv3lirdk58tu9mvy1qedpybq76l4f32mwd6my7isxf20pxl0zh92pe8sktdcccxc4sexvf72oj2941gjgbdsf3o5nc6j8evspiyjb73qwfco4vhwiswfcuw6zd1g15itady58kogcrb1rpwnq9ui0nov93dcodmg2pwhpqduzd836y19bjzh24o86x6xnfmkcu5cgpkf3uf6 == \g\p\3\9\v\6\a\f\a\f\m\8\l\q\c\h\x\2\a\e\o\q\a\t\l\k\e\p\0\l\l\6\j\f\p\k\h\v\e\j\8\2\d\m\k\a\d\y\s\7\4\0\4\6\p\r\a\d\e\a\u\e\u\1\h\l\k\g\u\y\j\6\h\o\q\p\t\7\u\d\3\l\i\0\c\m\2\v\y\8\0\x\h\i\x\b\0\r\x\a\v\g\4\0\0\z\u\x\y\4\i\l\s\t\5\r\e\q\e\h\g\t\r\q\y\s\z\g\8\i\6\8\7\j\m\e\9\p\n\i\c\2\d\a\c\5\2\y\l\u\3\t\i\b\f\p\5\l\m\z\l\t\z\4\3\e\h\9\d\m\9\5\1\b\8\7\g\e\p\z\v\i\p\a\0\n\s\g\0\d\r\d\h\j\s\4\h\u\d\a\w\g\d\f\3\1\7\s\r\6\b\d\4\d\5\w\2\j\5\1\s\l\4\r\q\k\k\s\8\x\g\z\5\s\3\m\h\a\1\d\h\s\g\s\e\9\f\s\h\a\n\1\t\o\0\b\n\5\5\6\r\6\e\7\5\n\b\q\6\e\e\d\l\c\2\g\d\m\n\j\y\l\a\v\6\f\x\5\w\k\o\1\n\q\m\q\m\p\o\n\0\u\y\5\e\0\8\t\g\v\3\l\i\r\d\k\5\8\t\u\9\m\v\y\1\q\e\d\p\y\b\q\7\6\l\4\f\3\2\m\w\d\6\m\y\7\i\s\x\f\2\0\p\x\l\0\z\h\9\2\p\e\8\s\k\t\d\c\c\c\x\c\4\s\e\x\v\f\7\2\o\j\2\9\4\1\g\j\g\b\d\s\f\3\o\5\n\c\6\j\8\e\v\s\p\i\y\j\b\7\3\q\w\f\c\o\4\v\h\w\i\s\w\f\c\u\w\6\z\d\1\g\1\5\i\t\a\d\y\5\8\k\o\g\c\r\b\1\r\p\w\n\q\9\u\i\0\n\o\v\9\3\d\c\o\d\m\g\2\p\w\h\p\q\d\u\z\d\8\3\6\y\1\9\b\j\z\h\2\4\o\8\6\x\6\x\n\f\m\k\c\u\5\c\g\p\k\f\3\u\f\6 ]] 00:08:11.527 00:08:11.527 real 0m1.998s 00:08:11.527 user 0m1.149s 00:08:11.527 sys 0m0.704s 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:11.527 ************************************ 00:08:11.527 START TEST dd_flag_noatime 00:08:11.527 ************************************ 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732092101 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732092102 00:08:11.527 08:41:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:12.467 08:41:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.467 [2024-11-20 08:41:43.373947] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:12.467 [2024-11-20 08:41:43.374101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60411 ] 00:08:12.727 [2024-11-20 08:41:43.524311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.727 [2024-11-20 08:41:43.604202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.987 [2024-11-20 08:41:43.675808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.987  [2024-11-20T08:41:44.161Z] Copying: 512/512 [B] (average 500 kBps) 00:08:13.246 00:08:13.246 08:41:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.246 08:41:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732092101 )) 00:08:13.246 08:41:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.246 08:41:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732092102 )) 00:08:13.246 08:41:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.246 [2024-11-20 08:41:44.056017] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:13.247 [2024-11-20 08:41:44.056208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60424 ] 00:08:13.505 [2024-11-20 08:41:44.212614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.505 [2024-11-20 08:41:44.303195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.505 [2024-11-20 08:41:44.379579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.842  [2024-11-20T08:41:44.757Z] Copying: 512/512 [B] (average 500 kBps) 00:08:13.842 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732092104 )) 00:08:13.842 00:08:13.842 real 0m2.406s 00:08:13.842 user 0m0.809s 00:08:13.842 sys 0m0.735s 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:13.842 ************************************ 00:08:13.842 END TEST dd_flag_noatime 00:08:13.842 ************************************ 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:13.842 ************************************ 00:08:13.842 START TEST dd_flags_misc 00:08:13.842 ************************************ 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:13.842 08:41:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:14.102 [2024-11-20 08:41:44.794115] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
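dd_flag_noatime, wrapped up above just before dd_flags_misc begins, records the input file's access time with stat --printf=%X, sleeps a second, copies it with --iflag=noatime, and checks that the atime did not move; a later copy without the flag is expected to advance it (as it did in this run; relatime mounts can behave differently). A sketch of the check:

    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1

    # Reading with noatime must leave the access time untouched.
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_before )) || exit 1

    # A plain read-and-copy is then expected to bump it.
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) > atime_before )) || exit 1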
00:08:14.102 [2024-11-20 08:41:44.794472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60453 ] 00:08:14.102 [2024-11-20 08:41:44.937054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.361 [2024-11-20 08:41:45.020877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.361 [2024-11-20 08:41:45.092410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.361  [2024-11-20T08:41:45.535Z] Copying: 512/512 [B] (average 500 kBps) 00:08:14.620 00:08:14.620 08:41:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yg27q43jjvlmpo7yp2l2ryfuet7a6i6o478e02pb40dd1gn8pjzjzqjovdy30zhrutdajkoq9akswwbp062fjeog5tkj3sphrdcez8rmqeqbk2uq5jauqyqkf7jhst6unfemp7ihi4tldihvs1k0yd36flyekcd36tdf7bwhhz7zmihmmrysahb9ia9ohnajr7pdqh7xk3x8abeo8zblub8khjsc9cg2cn1qurt3pfmkqxhsigvrbet2i87xzpkxwq1snvsnrqj1dhhfhhjvtad6gr30b4vymybghju7svjg9vw9b4u8ppeq33f6tvd9hlnfywup4ygd466v5a2ul2rzbalws94s1pomeryypxtti934wo8wdounqen9tu2cr8ponicwfeg94z4f1n0hkmx94umabr450cgy5lrykpvq7hjlg31o22c4cl9lydv51en89furd5ef8e67svu75a5v6yv4c62d4mq90loen3jqxfpggj1m6sqa5rhwgy6e == \y\g\2\7\q\4\3\j\j\v\l\m\p\o\7\y\p\2\l\2\r\y\f\u\e\t\7\a\6\i\6\o\4\7\8\e\0\2\p\b\4\0\d\d\1\g\n\8\p\j\z\j\z\q\j\o\v\d\y\3\0\z\h\r\u\t\d\a\j\k\o\q\9\a\k\s\w\w\b\p\0\6\2\f\j\e\o\g\5\t\k\j\3\s\p\h\r\d\c\e\z\8\r\m\q\e\q\b\k\2\u\q\5\j\a\u\q\y\q\k\f\7\j\h\s\t\6\u\n\f\e\m\p\7\i\h\i\4\t\l\d\i\h\v\s\1\k\0\y\d\3\6\f\l\y\e\k\c\d\3\6\t\d\f\7\b\w\h\h\z\7\z\m\i\h\m\m\r\y\s\a\h\b\9\i\a\9\o\h\n\a\j\r\7\p\d\q\h\7\x\k\3\x\8\a\b\e\o\8\z\b\l\u\b\8\k\h\j\s\c\9\c\g\2\c\n\1\q\u\r\t\3\p\f\m\k\q\x\h\s\i\g\v\r\b\e\t\2\i\8\7\x\z\p\k\x\w\q\1\s\n\v\s\n\r\q\j\1\d\h\h\f\h\h\j\v\t\a\d\6\g\r\3\0\b\4\v\y\m\y\b\g\h\j\u\7\s\v\j\g\9\v\w\9\b\4\u\8\p\p\e\q\3\3\f\6\t\v\d\9\h\l\n\f\y\w\u\p\4\y\g\d\4\6\6\v\5\a\2\u\l\2\r\z\b\a\l\w\s\9\4\s\1\p\o\m\e\r\y\y\p\x\t\t\i\9\3\4\w\o\8\w\d\o\u\n\q\e\n\9\t\u\2\c\r\8\p\o\n\i\c\w\f\e\g\9\4\z\4\f\1\n\0\h\k\m\x\9\4\u\m\a\b\r\4\5\0\c\g\y\5\l\r\y\k\p\v\q\7\h\j\l\g\3\1\o\2\2\c\4\c\l\9\l\y\d\v\5\1\e\n\8\9\f\u\r\d\5\e\f\8\e\6\7\s\v\u\7\5\a\5\v\6\y\v\4\c\6\2\d\4\m\q\9\0\l\o\e\n\3\j\q\x\f\p\g\g\j\1\m\6\s\q\a\5\r\h\w\g\y\6\e ]] 00:08:14.620 08:41:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:14.620 08:41:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:14.620 [2024-11-20 08:41:45.435367] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:14.620 [2024-11-20 08:41:45.435850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60468 ] 00:08:14.879 [2024-11-20 08:41:45.578330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.879 [2024-11-20 08:41:45.656521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.879 [2024-11-20 08:41:45.727140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.879  [2024-11-20T08:41:46.052Z] Copying: 512/512 [B] (average 500 kBps) 00:08:15.137 00:08:15.137 08:41:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yg27q43jjvlmpo7yp2l2ryfuet7a6i6o478e02pb40dd1gn8pjzjzqjovdy30zhrutdajkoq9akswwbp062fjeog5tkj3sphrdcez8rmqeqbk2uq5jauqyqkf7jhst6unfemp7ihi4tldihvs1k0yd36flyekcd36tdf7bwhhz7zmihmmrysahb9ia9ohnajr7pdqh7xk3x8abeo8zblub8khjsc9cg2cn1qurt3pfmkqxhsigvrbet2i87xzpkxwq1snvsnrqj1dhhfhhjvtad6gr30b4vymybghju7svjg9vw9b4u8ppeq33f6tvd9hlnfywup4ygd466v5a2ul2rzbalws94s1pomeryypxtti934wo8wdounqen9tu2cr8ponicwfeg94z4f1n0hkmx94umabr450cgy5lrykpvq7hjlg31o22c4cl9lydv51en89furd5ef8e67svu75a5v6yv4c62d4mq90loen3jqxfpggj1m6sqa5rhwgy6e == \y\g\2\7\q\4\3\j\j\v\l\m\p\o\7\y\p\2\l\2\r\y\f\u\e\t\7\a\6\i\6\o\4\7\8\e\0\2\p\b\4\0\d\d\1\g\n\8\p\j\z\j\z\q\j\o\v\d\y\3\0\z\h\r\u\t\d\a\j\k\o\q\9\a\k\s\w\w\b\p\0\6\2\f\j\e\o\g\5\t\k\j\3\s\p\h\r\d\c\e\z\8\r\m\q\e\q\b\k\2\u\q\5\j\a\u\q\y\q\k\f\7\j\h\s\t\6\u\n\f\e\m\p\7\i\h\i\4\t\l\d\i\h\v\s\1\k\0\y\d\3\6\f\l\y\e\k\c\d\3\6\t\d\f\7\b\w\h\h\z\7\z\m\i\h\m\m\r\y\s\a\h\b\9\i\a\9\o\h\n\a\j\r\7\p\d\q\h\7\x\k\3\x\8\a\b\e\o\8\z\b\l\u\b\8\k\h\j\s\c\9\c\g\2\c\n\1\q\u\r\t\3\p\f\m\k\q\x\h\s\i\g\v\r\b\e\t\2\i\8\7\x\z\p\k\x\w\q\1\s\n\v\s\n\r\q\j\1\d\h\h\f\h\h\j\v\t\a\d\6\g\r\3\0\b\4\v\y\m\y\b\g\h\j\u\7\s\v\j\g\9\v\w\9\b\4\u\8\p\p\e\q\3\3\f\6\t\v\d\9\h\l\n\f\y\w\u\p\4\y\g\d\4\6\6\v\5\a\2\u\l\2\r\z\b\a\l\w\s\9\4\s\1\p\o\m\e\r\y\y\p\x\t\t\i\9\3\4\w\o\8\w\d\o\u\n\q\e\n\9\t\u\2\c\r\8\p\o\n\i\c\w\f\e\g\9\4\z\4\f\1\n\0\h\k\m\x\9\4\u\m\a\b\r\4\5\0\c\g\y\5\l\r\y\k\p\v\q\7\h\j\l\g\3\1\o\2\2\c\4\c\l\9\l\y\d\v\5\1\e\n\8\9\f\u\r\d\5\e\f\8\e\6\7\s\v\u\7\5\a\5\v\6\y\v\4\c\6\2\d\4\m\q\9\0\l\o\e\n\3\j\q\x\f\p\g\g\j\1\m\6\s\q\a\5\r\h\w\g\y\6\e ]] 00:08:15.137 08:41:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:15.137 08:41:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:15.396 [2024-11-20 08:41:46.106968] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:15.396 [2024-11-20 08:41:46.107395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 00:08:15.396 [2024-11-20 08:41:46.260245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.655 [2024-11-20 08:41:46.341036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.655 [2024-11-20 08:41:46.414129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.655  [2024-11-20T08:41:46.828Z] Copying: 512/512 [B] (average 125 kBps) 00:08:15.913 00:08:15.914 08:41:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yg27q43jjvlmpo7yp2l2ryfuet7a6i6o478e02pb40dd1gn8pjzjzqjovdy30zhrutdajkoq9akswwbp062fjeog5tkj3sphrdcez8rmqeqbk2uq5jauqyqkf7jhst6unfemp7ihi4tldihvs1k0yd36flyekcd36tdf7bwhhz7zmihmmrysahb9ia9ohnajr7pdqh7xk3x8abeo8zblub8khjsc9cg2cn1qurt3pfmkqxhsigvrbet2i87xzpkxwq1snvsnrqj1dhhfhhjvtad6gr30b4vymybghju7svjg9vw9b4u8ppeq33f6tvd9hlnfywup4ygd466v5a2ul2rzbalws94s1pomeryypxtti934wo8wdounqen9tu2cr8ponicwfeg94z4f1n0hkmx94umabr450cgy5lrykpvq7hjlg31o22c4cl9lydv51en89furd5ef8e67svu75a5v6yv4c62d4mq90loen3jqxfpggj1m6sqa5rhwgy6e == \y\g\2\7\q\4\3\j\j\v\l\m\p\o\7\y\p\2\l\2\r\y\f\u\e\t\7\a\6\i\6\o\4\7\8\e\0\2\p\b\4\0\d\d\1\g\n\8\p\j\z\j\z\q\j\o\v\d\y\3\0\z\h\r\u\t\d\a\j\k\o\q\9\a\k\s\w\w\b\p\0\6\2\f\j\e\o\g\5\t\k\j\3\s\p\h\r\d\c\e\z\8\r\m\q\e\q\b\k\2\u\q\5\j\a\u\q\y\q\k\f\7\j\h\s\t\6\u\n\f\e\m\p\7\i\h\i\4\t\l\d\i\h\v\s\1\k\0\y\d\3\6\f\l\y\e\k\c\d\3\6\t\d\f\7\b\w\h\h\z\7\z\m\i\h\m\m\r\y\s\a\h\b\9\i\a\9\o\h\n\a\j\r\7\p\d\q\h\7\x\k\3\x\8\a\b\e\o\8\z\b\l\u\b\8\k\h\j\s\c\9\c\g\2\c\n\1\q\u\r\t\3\p\f\m\k\q\x\h\s\i\g\v\r\b\e\t\2\i\8\7\x\z\p\k\x\w\q\1\s\n\v\s\n\r\q\j\1\d\h\h\f\h\h\j\v\t\a\d\6\g\r\3\0\b\4\v\y\m\y\b\g\h\j\u\7\s\v\j\g\9\v\w\9\b\4\u\8\p\p\e\q\3\3\f\6\t\v\d\9\h\l\n\f\y\w\u\p\4\y\g\d\4\6\6\v\5\a\2\u\l\2\r\z\b\a\l\w\s\9\4\s\1\p\o\m\e\r\y\y\p\x\t\t\i\9\3\4\w\o\8\w\d\o\u\n\q\e\n\9\t\u\2\c\r\8\p\o\n\i\c\w\f\e\g\9\4\z\4\f\1\n\0\h\k\m\x\9\4\u\m\a\b\r\4\5\0\c\g\y\5\l\r\y\k\p\v\q\7\h\j\l\g\3\1\o\2\2\c\4\c\l\9\l\y\d\v\5\1\e\n\8\9\f\u\r\d\5\e\f\8\e\6\7\s\v\u\7\5\a\5\v\6\y\v\4\c\6\2\d\4\m\q\9\0\l\o\e\n\3\j\q\x\f\p\g\g\j\1\m\6\s\q\a\5\r\h\w\g\y\6\e ]] 00:08:15.914 08:41:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:15.914 08:41:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:15.914 [2024-11-20 08:41:46.800915] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:15.914 [2024-11-20 08:41:46.801615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60487 ] 00:08:16.172 [2024-11-20 08:41:46.962112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.172 [2024-11-20 08:41:47.042020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.431 [2024-11-20 08:41:47.114063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.431  [2024-11-20T08:41:47.604Z] Copying: 512/512 [B] (average 250 kBps) 00:08:16.689 00:08:16.689 08:41:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ yg27q43jjvlmpo7yp2l2ryfuet7a6i6o478e02pb40dd1gn8pjzjzqjovdy30zhrutdajkoq9akswwbp062fjeog5tkj3sphrdcez8rmqeqbk2uq5jauqyqkf7jhst6unfemp7ihi4tldihvs1k0yd36flyekcd36tdf7bwhhz7zmihmmrysahb9ia9ohnajr7pdqh7xk3x8abeo8zblub8khjsc9cg2cn1qurt3pfmkqxhsigvrbet2i87xzpkxwq1snvsnrqj1dhhfhhjvtad6gr30b4vymybghju7svjg9vw9b4u8ppeq33f6tvd9hlnfywup4ygd466v5a2ul2rzbalws94s1pomeryypxtti934wo8wdounqen9tu2cr8ponicwfeg94z4f1n0hkmx94umabr450cgy5lrykpvq7hjlg31o22c4cl9lydv51en89furd5ef8e67svu75a5v6yv4c62d4mq90loen3jqxfpggj1m6sqa5rhwgy6e == \y\g\2\7\q\4\3\j\j\v\l\m\p\o\7\y\p\2\l\2\r\y\f\u\e\t\7\a\6\i\6\o\4\7\8\e\0\2\p\b\4\0\d\d\1\g\n\8\p\j\z\j\z\q\j\o\v\d\y\3\0\z\h\r\u\t\d\a\j\k\o\q\9\a\k\s\w\w\b\p\0\6\2\f\j\e\o\g\5\t\k\j\3\s\p\h\r\d\c\e\z\8\r\m\q\e\q\b\k\2\u\q\5\j\a\u\q\y\q\k\f\7\j\h\s\t\6\u\n\f\e\m\p\7\i\h\i\4\t\l\d\i\h\v\s\1\k\0\y\d\3\6\f\l\y\e\k\c\d\3\6\t\d\f\7\b\w\h\h\z\7\z\m\i\h\m\m\r\y\s\a\h\b\9\i\a\9\o\h\n\a\j\r\7\p\d\q\h\7\x\k\3\x\8\a\b\e\o\8\z\b\l\u\b\8\k\h\j\s\c\9\c\g\2\c\n\1\q\u\r\t\3\p\f\m\k\q\x\h\s\i\g\v\r\b\e\t\2\i\8\7\x\z\p\k\x\w\q\1\s\n\v\s\n\r\q\j\1\d\h\h\f\h\h\j\v\t\a\d\6\g\r\3\0\b\4\v\y\m\y\b\g\h\j\u\7\s\v\j\g\9\v\w\9\b\4\u\8\p\p\e\q\3\3\f\6\t\v\d\9\h\l\n\f\y\w\u\p\4\y\g\d\4\6\6\v\5\a\2\u\l\2\r\z\b\a\l\w\s\9\4\s\1\p\o\m\e\r\y\y\p\x\t\t\i\9\3\4\w\o\8\w\d\o\u\n\q\e\n\9\t\u\2\c\r\8\p\o\n\i\c\w\f\e\g\9\4\z\4\f\1\n\0\h\k\m\x\9\4\u\m\a\b\r\4\5\0\c\g\y\5\l\r\y\k\p\v\q\7\h\j\l\g\3\1\o\2\2\c\4\c\l\9\l\y\d\v\5\1\e\n\8\9\f\u\r\d\5\e\f\8\e\6\7\s\v\u\7\5\a\5\v\6\y\v\4\c\6\2\d\4\m\q\9\0\l\o\e\n\3\j\q\x\f\p\g\g\j\1\m\6\s\q\a\5\r\h\w\g\y\6\e ]] 00:08:16.689 08:41:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:16.689 08:41:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:16.689 08:41:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:16.689 08:41:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:16.690 08:41:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:16.690 08:41:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:16.690 [2024-11-20 08:41:47.474421] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
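The dd_flags_misc runs above and below walk a small flag matrix: each input open flag (direct, nonblock) is combined with every output flag (direct, nonblock, sync, dsync), and after each copy the 512-byte destination must match the source byte for byte. A hedged sketch of that driver loop follows; the flag arrays mirror the xtrace lines, while the cmp check stands in for the inline content comparison the test itself performs.

#!/usr/bin/env bash
# Sketch of the input/output flag matrix exercised by dd_flags_misc (illustrative only).
set -eu

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=dd.dump0
dst=dd.dump1
dd if=/dev/urandom of="$src" bs=512 count=1 status=none

flags_ro=(direct nonblock)               # flags applied when opening the input
flags_rw=("${flags_ro[@]}" sync dsync)   # flags applied when opening the output

for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
    cmp -s "$src" "$dst" || echo "FAIL: copy differs for iflag=$flag_ro oflag=$flag_rw"
  done
done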
00:08:16.690 [2024-11-20 08:41:47.474563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:08:16.947 [2024-11-20 08:41:47.625907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.947 [2024-11-20 08:41:47.705616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.947 [2024-11-20 08:41:47.776793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.947  [2024-11-20T08:41:48.121Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.206 00:08:17.206 08:41:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sfbm55rzmcbmia2ovxwc1x5pdhei0hb0myxczqewqn4awzgodc4c8j1icspg6vtw39mz96sbcvmnw1lylib6e6xs7zj9dj3mt9zcqo5vjcjjsi08irysqn01yicpb80qblzq4phgi5oge4qjh57irku5anvryz0r6s38tvizk09ojitsd7xkn5hv1a0dwbkwomo0aizw92dt1a0w1mtn2iy53cp61ic8r72k96raqki40a8t8vx8kx9aqxekvu4nm70zyiqhhcpgoixsczwr5v6b4bsukxaxm5sc46xt95k54xg7olacak24xj05970xuzq4bbeoipt275pwldknifg2nglgh66gyj3utsxfq9trk7vkf7tbnnvs7ql7acf9gtajp3z0g2vp0bzex9vwx07mczltt3nct8ww4s5mxg5dm5wknsdm5zyj5dxxxw2wumlqfz312qnxxwh7wo9hw0zdc69hbet24q03inazh576q2l5fhwkpzyx9dcb8q9q == \s\f\b\m\5\5\r\z\m\c\b\m\i\a\2\o\v\x\w\c\1\x\5\p\d\h\e\i\0\h\b\0\m\y\x\c\z\q\e\w\q\n\4\a\w\z\g\o\d\c\4\c\8\j\1\i\c\s\p\g\6\v\t\w\3\9\m\z\9\6\s\b\c\v\m\n\w\1\l\y\l\i\b\6\e\6\x\s\7\z\j\9\d\j\3\m\t\9\z\c\q\o\5\v\j\c\j\j\s\i\0\8\i\r\y\s\q\n\0\1\y\i\c\p\b\8\0\q\b\l\z\q\4\p\h\g\i\5\o\g\e\4\q\j\h\5\7\i\r\k\u\5\a\n\v\r\y\z\0\r\6\s\3\8\t\v\i\z\k\0\9\o\j\i\t\s\d\7\x\k\n\5\h\v\1\a\0\d\w\b\k\w\o\m\o\0\a\i\z\w\9\2\d\t\1\a\0\w\1\m\t\n\2\i\y\5\3\c\p\6\1\i\c\8\r\7\2\k\9\6\r\a\q\k\i\4\0\a\8\t\8\v\x\8\k\x\9\a\q\x\e\k\v\u\4\n\m\7\0\z\y\i\q\h\h\c\p\g\o\i\x\s\c\z\w\r\5\v\6\b\4\b\s\u\k\x\a\x\m\5\s\c\4\6\x\t\9\5\k\5\4\x\g\7\o\l\a\c\a\k\2\4\x\j\0\5\9\7\0\x\u\z\q\4\b\b\e\o\i\p\t\2\7\5\p\w\l\d\k\n\i\f\g\2\n\g\l\g\h\6\6\g\y\j\3\u\t\s\x\f\q\9\t\r\k\7\v\k\f\7\t\b\n\n\v\s\7\q\l\7\a\c\f\9\g\t\a\j\p\3\z\0\g\2\v\p\0\b\z\e\x\9\v\w\x\0\7\m\c\z\l\t\t\3\n\c\t\8\w\w\4\s\5\m\x\g\5\d\m\5\w\k\n\s\d\m\5\z\y\j\5\d\x\x\x\w\2\w\u\m\l\q\f\z\3\1\2\q\n\x\x\w\h\7\w\o\9\h\w\0\z\d\c\6\9\h\b\e\t\2\4\q\0\3\i\n\a\z\h\5\7\6\q\2\l\5\f\h\w\k\p\z\y\x\9\d\c\b\8\q\9\q ]] 00:08:17.206 08:41:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.206 08:41:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:17.464 [2024-11-20 08:41:48.128259] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:17.464 [2024-11-20 08:41:48.128390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60511 ] 00:08:17.464 [2024-11-20 08:41:48.273826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.464 [2024-11-20 08:41:48.372982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.722 [2024-11-20 08:41:48.449109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.722  [2024-11-20T08:41:48.896Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.981 00:08:17.981 08:41:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sfbm55rzmcbmia2ovxwc1x5pdhei0hb0myxczqewqn4awzgodc4c8j1icspg6vtw39mz96sbcvmnw1lylib6e6xs7zj9dj3mt9zcqo5vjcjjsi08irysqn01yicpb80qblzq4phgi5oge4qjh57irku5anvryz0r6s38tvizk09ojitsd7xkn5hv1a0dwbkwomo0aizw92dt1a0w1mtn2iy53cp61ic8r72k96raqki40a8t8vx8kx9aqxekvu4nm70zyiqhhcpgoixsczwr5v6b4bsukxaxm5sc46xt95k54xg7olacak24xj05970xuzq4bbeoipt275pwldknifg2nglgh66gyj3utsxfq9trk7vkf7tbnnvs7ql7acf9gtajp3z0g2vp0bzex9vwx07mczltt3nct8ww4s5mxg5dm5wknsdm5zyj5dxxxw2wumlqfz312qnxxwh7wo9hw0zdc69hbet24q03inazh576q2l5fhwkpzyx9dcb8q9q == \s\f\b\m\5\5\r\z\m\c\b\m\i\a\2\o\v\x\w\c\1\x\5\p\d\h\e\i\0\h\b\0\m\y\x\c\z\q\e\w\q\n\4\a\w\z\g\o\d\c\4\c\8\j\1\i\c\s\p\g\6\v\t\w\3\9\m\z\9\6\s\b\c\v\m\n\w\1\l\y\l\i\b\6\e\6\x\s\7\z\j\9\d\j\3\m\t\9\z\c\q\o\5\v\j\c\j\j\s\i\0\8\i\r\y\s\q\n\0\1\y\i\c\p\b\8\0\q\b\l\z\q\4\p\h\g\i\5\o\g\e\4\q\j\h\5\7\i\r\k\u\5\a\n\v\r\y\z\0\r\6\s\3\8\t\v\i\z\k\0\9\o\j\i\t\s\d\7\x\k\n\5\h\v\1\a\0\d\w\b\k\w\o\m\o\0\a\i\z\w\9\2\d\t\1\a\0\w\1\m\t\n\2\i\y\5\3\c\p\6\1\i\c\8\r\7\2\k\9\6\r\a\q\k\i\4\0\a\8\t\8\v\x\8\k\x\9\a\q\x\e\k\v\u\4\n\m\7\0\z\y\i\q\h\h\c\p\g\o\i\x\s\c\z\w\r\5\v\6\b\4\b\s\u\k\x\a\x\m\5\s\c\4\6\x\t\9\5\k\5\4\x\g\7\o\l\a\c\a\k\2\4\x\j\0\5\9\7\0\x\u\z\q\4\b\b\e\o\i\p\t\2\7\5\p\w\l\d\k\n\i\f\g\2\n\g\l\g\h\6\6\g\y\j\3\u\t\s\x\f\q\9\t\r\k\7\v\k\f\7\t\b\n\n\v\s\7\q\l\7\a\c\f\9\g\t\a\j\p\3\z\0\g\2\v\p\0\b\z\e\x\9\v\w\x\0\7\m\c\z\l\t\t\3\n\c\t\8\w\w\4\s\5\m\x\g\5\d\m\5\w\k\n\s\d\m\5\z\y\j\5\d\x\x\x\w\2\w\u\m\l\q\f\z\3\1\2\q\n\x\x\w\h\7\w\o\9\h\w\0\z\d\c\6\9\h\b\e\t\2\4\q\0\3\i\n\a\z\h\5\7\6\q\2\l\5\f\h\w\k\p\z\y\x\9\d\c\b\8\q\9\q ]] 00:08:17.981 08:41:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.981 08:41:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:17.981 [2024-11-20 08:41:48.808296] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:17.981 [2024-11-20 08:41:48.808464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60521 ] 00:08:18.240 [2024-11-20 08:41:48.960337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.240 [2024-11-20 08:41:49.039619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.240 [2024-11-20 08:41:49.111880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.522  [2024-11-20T08:41:49.437Z] Copying: 512/512 [B] (average 250 kBps) 00:08:18.522 00:08:18.522 08:41:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sfbm55rzmcbmia2ovxwc1x5pdhei0hb0myxczqewqn4awzgodc4c8j1icspg6vtw39mz96sbcvmnw1lylib6e6xs7zj9dj3mt9zcqo5vjcjjsi08irysqn01yicpb80qblzq4phgi5oge4qjh57irku5anvryz0r6s38tvizk09ojitsd7xkn5hv1a0dwbkwomo0aizw92dt1a0w1mtn2iy53cp61ic8r72k96raqki40a8t8vx8kx9aqxekvu4nm70zyiqhhcpgoixsczwr5v6b4bsukxaxm5sc46xt95k54xg7olacak24xj05970xuzq4bbeoipt275pwldknifg2nglgh66gyj3utsxfq9trk7vkf7tbnnvs7ql7acf9gtajp3z0g2vp0bzex9vwx07mczltt3nct8ww4s5mxg5dm5wknsdm5zyj5dxxxw2wumlqfz312qnxxwh7wo9hw0zdc69hbet24q03inazh576q2l5fhwkpzyx9dcb8q9q == \s\f\b\m\5\5\r\z\m\c\b\m\i\a\2\o\v\x\w\c\1\x\5\p\d\h\e\i\0\h\b\0\m\y\x\c\z\q\e\w\q\n\4\a\w\z\g\o\d\c\4\c\8\j\1\i\c\s\p\g\6\v\t\w\3\9\m\z\9\6\s\b\c\v\m\n\w\1\l\y\l\i\b\6\e\6\x\s\7\z\j\9\d\j\3\m\t\9\z\c\q\o\5\v\j\c\j\j\s\i\0\8\i\r\y\s\q\n\0\1\y\i\c\p\b\8\0\q\b\l\z\q\4\p\h\g\i\5\o\g\e\4\q\j\h\5\7\i\r\k\u\5\a\n\v\r\y\z\0\r\6\s\3\8\t\v\i\z\k\0\9\o\j\i\t\s\d\7\x\k\n\5\h\v\1\a\0\d\w\b\k\w\o\m\o\0\a\i\z\w\9\2\d\t\1\a\0\w\1\m\t\n\2\i\y\5\3\c\p\6\1\i\c\8\r\7\2\k\9\6\r\a\q\k\i\4\0\a\8\t\8\v\x\8\k\x\9\a\q\x\e\k\v\u\4\n\m\7\0\z\y\i\q\h\h\c\p\g\o\i\x\s\c\z\w\r\5\v\6\b\4\b\s\u\k\x\a\x\m\5\s\c\4\6\x\t\9\5\k\5\4\x\g\7\o\l\a\c\a\k\2\4\x\j\0\5\9\7\0\x\u\z\q\4\b\b\e\o\i\p\t\2\7\5\p\w\l\d\k\n\i\f\g\2\n\g\l\g\h\6\6\g\y\j\3\u\t\s\x\f\q\9\t\r\k\7\v\k\f\7\t\b\n\n\v\s\7\q\l\7\a\c\f\9\g\t\a\j\p\3\z\0\g\2\v\p\0\b\z\e\x\9\v\w\x\0\7\m\c\z\l\t\t\3\n\c\t\8\w\w\4\s\5\m\x\g\5\d\m\5\w\k\n\s\d\m\5\z\y\j\5\d\x\x\x\w\2\w\u\m\l\q\f\z\3\1\2\q\n\x\x\w\h\7\w\o\9\h\w\0\z\d\c\6\9\h\b\e\t\2\4\q\0\3\i\n\a\z\h\5\7\6\q\2\l\5\f\h\w\k\p\z\y\x\9\d\c\b\8\q\9\q ]] 00:08:18.522 08:41:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.522 08:41:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:18.780 [2024-11-20 08:41:49.465921] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:18.780 [2024-11-20 08:41:49.466035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60536 ] 00:08:18.780 [2024-11-20 08:41:49.617561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.039 [2024-11-20 08:41:49.695510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.039 [2024-11-20 08:41:49.766301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.039  [2024-11-20T08:41:50.211Z] Copying: 512/512 [B] (average 250 kBps) 00:08:19.296 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sfbm55rzmcbmia2ovxwc1x5pdhei0hb0myxczqewqn4awzgodc4c8j1icspg6vtw39mz96sbcvmnw1lylib6e6xs7zj9dj3mt9zcqo5vjcjjsi08irysqn01yicpb80qblzq4phgi5oge4qjh57irku5anvryz0r6s38tvizk09ojitsd7xkn5hv1a0dwbkwomo0aizw92dt1a0w1mtn2iy53cp61ic8r72k96raqki40a8t8vx8kx9aqxekvu4nm70zyiqhhcpgoixsczwr5v6b4bsukxaxm5sc46xt95k54xg7olacak24xj05970xuzq4bbeoipt275pwldknifg2nglgh66gyj3utsxfq9trk7vkf7tbnnvs7ql7acf9gtajp3z0g2vp0bzex9vwx07mczltt3nct8ww4s5mxg5dm5wknsdm5zyj5dxxxw2wumlqfz312qnxxwh7wo9hw0zdc69hbet24q03inazh576q2l5fhwkpzyx9dcb8q9q == \s\f\b\m\5\5\r\z\m\c\b\m\i\a\2\o\v\x\w\c\1\x\5\p\d\h\e\i\0\h\b\0\m\y\x\c\z\q\e\w\q\n\4\a\w\z\g\o\d\c\4\c\8\j\1\i\c\s\p\g\6\v\t\w\3\9\m\z\9\6\s\b\c\v\m\n\w\1\l\y\l\i\b\6\e\6\x\s\7\z\j\9\d\j\3\m\t\9\z\c\q\o\5\v\j\c\j\j\s\i\0\8\i\r\y\s\q\n\0\1\y\i\c\p\b\8\0\q\b\l\z\q\4\p\h\g\i\5\o\g\e\4\q\j\h\5\7\i\r\k\u\5\a\n\v\r\y\z\0\r\6\s\3\8\t\v\i\z\k\0\9\o\j\i\t\s\d\7\x\k\n\5\h\v\1\a\0\d\w\b\k\w\o\m\o\0\a\i\z\w\9\2\d\t\1\a\0\w\1\m\t\n\2\i\y\5\3\c\p\6\1\i\c\8\r\7\2\k\9\6\r\a\q\k\i\4\0\a\8\t\8\v\x\8\k\x\9\a\q\x\e\k\v\u\4\n\m\7\0\z\y\i\q\h\h\c\p\g\o\i\x\s\c\z\w\r\5\v\6\b\4\b\s\u\k\x\a\x\m\5\s\c\4\6\x\t\9\5\k\5\4\x\g\7\o\l\a\c\a\k\2\4\x\j\0\5\9\7\0\x\u\z\q\4\b\b\e\o\i\p\t\2\7\5\p\w\l\d\k\n\i\f\g\2\n\g\l\g\h\6\6\g\y\j\3\u\t\s\x\f\q\9\t\r\k\7\v\k\f\7\t\b\n\n\v\s\7\q\l\7\a\c\f\9\g\t\a\j\p\3\z\0\g\2\v\p\0\b\z\e\x\9\v\w\x\0\7\m\c\z\l\t\t\3\n\c\t\8\w\w\4\s\5\m\x\g\5\d\m\5\w\k\n\s\d\m\5\z\y\j\5\d\x\x\x\w\2\w\u\m\l\q\f\z\3\1\2\q\n\x\x\w\h\7\w\o\9\h\w\0\z\d\c\6\9\h\b\e\t\2\4\q\0\3\i\n\a\z\h\5\7\6\q\2\l\5\f\h\w\k\p\z\y\x\9\d\c\b\8\q\9\q ]] 00:08:19.297 00:08:19.297 real 0m5.331s 00:08:19.297 user 0m3.076s 00:08:19.297 sys 0m2.792s 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.297 ************************************ 00:08:19.297 END TEST dd_flags_misc 00:08:19.297 ************************************ 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:19.297 * Second test run, disabling liburing, forcing AIO 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.297 ************************************ 00:08:19.297 START TEST dd_flag_append_forced_aio 00:08:19.297 ************************************ 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=etsvggneljmj9jt7n5ovk02cot9m2sjv 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=uqnf03jlsgb9sh9ioasrp90i2x6s0bdc 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s etsvggneljmj9jt7n5ovk02cot9m2sjv 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s uqnf03jlsgb9sh9ioasrp90i2x6s0bdc 00:08:19.297 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:19.297 [2024-11-20 08:41:50.173668] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
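From the "Second test run, disabling liburing, forcing AIO" banner onward, every spdk_dd call repeats with --aio so the POSIX AIO path is exercised instead of io_uring. The append test just above seeds each dump file with a random 32-character string, appends dump0 onto dump1 with --oflag=append, and expects dump1 to end up as its original contents followed by dump0's. A minimal sketch under those assumptions (the random-string generation here is a stand-in for the test's gen_bytes helper):

#!/usr/bin/env bash
# Sketch of the forced-AIO append check (illustrative; not the actual dd/posix.sh).
set -eu

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)   # stand-in for gen_bytes 32
dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1

# --aio forces the POSIX AIO code path; --oflag=append opens the output with O_APPEND.
"$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append

# The output must now be its original bytes followed by the appended input.
[[ $(< dd.dump1) == "${dump1}${dump0}" ]] || echo "FAIL: append result mismatch"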
00:08:19.297 [2024-11-20 08:41:50.173772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60565 ] 00:08:19.555 [2024-11-20 08:41:50.314832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.555 [2024-11-20 08:41:50.396826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.813 [2024-11-20 08:41:50.470878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.813  [2024-11-20T08:41:50.987Z] Copying: 32/32 [B] (average 31 kBps) 00:08:20.072 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ uqnf03jlsgb9sh9ioasrp90i2x6s0bdcetsvggneljmj9jt7n5ovk02cot9m2sjv == \u\q\n\f\0\3\j\l\s\g\b\9\s\h\9\i\o\a\s\r\p\9\0\i\2\x\6\s\0\b\d\c\e\t\s\v\g\g\n\e\l\j\m\j\9\j\t\7\n\5\o\v\k\0\2\c\o\t\9\m\2\s\j\v ]] 00:08:20.072 00:08:20.072 real 0m0.681s 00:08:20.072 user 0m0.380s 00:08:20.072 sys 0m0.179s 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.072 ************************************ 00:08:20.072 END TEST dd_flag_append_forced_aio 00:08:20.072 ************************************ 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:20.072 ************************************ 00:08:20.072 START TEST dd_flag_directory_forced_aio 00:08:20.072 ************************************ 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.072 08:41:50 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.072 08:41:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.072 [2024-11-20 08:41:50.901266] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:20.072 [2024-11-20 08:41:50.901380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60591 ] 00:08:20.331 [2024-11-20 08:41:51.047055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.331 [2024-11-20 08:41:51.127421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.331 [2024-11-20 08:41:51.201211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.590 [2024-11-20 08:41:51.252014] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:20.590 [2024-11-20 08:41:51.252091] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:20.590 [2024-11-20 08:41:51.252113] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.590 [2024-11-20 08:41:51.420238] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.849 08:41:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:20.849 [2024-11-20 08:41:51.557065] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:20.849 [2024-11-20 08:41:51.557166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ] 00:08:20.849 [2024-11-20 08:41:51.706402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.108 [2024-11-20 08:41:51.796928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.108 [2024-11-20 08:41:51.873669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.108 [2024-11-20 08:41:51.925244] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:21.108 [2024-11-20 08:41:51.925317] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:21.108 [2024-11-20 08:41:51.925339] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.366 [2024-11-20 08:41:52.097470] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:21.366 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:21.366 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.366 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:21.366 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:21.366 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:21.367 08:41:52 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.367 00:08:21.367 real 0m1.341s 00:08:21.367 user 0m0.775s 00:08:21.367 sys 0m0.353s 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.367 ************************************ 00:08:21.367 END TEST dd_flag_directory_forced_aio 00:08:21.367 ************************************ 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:21.367 ************************************ 00:08:21.367 START TEST dd_flag_nofollow_forced_aio 00:08:21.367 ************************************ 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.367 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.625 [2024-11-20 08:41:52.311192] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:21.625 [2024-11-20 08:41:52.311334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:08:21.625 [2024-11-20 08:41:52.461358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.884 [2024-11-20 08:41:52.541506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.884 [2024-11-20 08:41:52.614077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.884 [2024-11-20 08:41:52.664245] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:21.884 [2024-11-20 08:41:52.664313] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:21.884 [2024-11-20 08:41:52.664335] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.143 [2024-11-20 08:41:52.833368] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.143 08:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:22.143 [2024-11-20 08:41:52.978159] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:22.143 [2024-11-20 08:41:52.978285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60644 ] 00:08:22.401 [2024-11-20 08:41:53.123088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.401 [2024-11-20 08:41:53.205745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.401 [2024-11-20 08:41:53.281393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.728 [2024-11-20 08:41:53.331211] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:22.728 [2024-11-20 08:41:53.331283] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:22.728 [2024-11-20 08:41:53.331306] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.728 [2024-11-20 08:41:53.497182] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:22.728 08:41:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.989 [2024-11-20 08:41:53.654775] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:22.989 [2024-11-20 08:41:53.654923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60657 ] 00:08:22.989 [2024-11-20 08:41:53.802425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.250 [2024-11-20 08:41:53.906116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.250 [2024-11-20 08:41:53.979969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.250  [2024-11-20T08:41:54.423Z] Copying: 512/512 [B] (average 500 kBps) 00:08:23.508 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ utxi5flz5role4tk5qbuyje8kghu8zkhodtsd22uzec1qlbklzd1foifoioci9chqcd9macngsqqoo4yphzptgifznh8h87cy4ka2r39tud1fx0jujqpcjv08m4st4ffsibjscndv24te48lrtk0aecxgfmxipxmtd2ukx968pxijj156kpb785dvet9h2ih60ztx44206f0415vk5hdw0gjq5rem0yq1khagnn4po1c6ijfu3xtesouk48wa0byk7hdfheb2q0nex9zifsjrpla3einw6ltxty1klu4ko2wnpbdu9k3w70zt4wul8cjueo33i35ujyxh5ur42tpuufswt34ech4m114mj0vuw28egemk56pxiqtd36h6u9kdb86n72i6a8z1z6wgnd1st6lw2vvfmtqmk0cljwbnkjq3z7ox27mtoqtnzh584mehhnc3tepv9bemeflhxny8rtvj5emn3xv4fp7lghzwo6g3pqfpmzithskihpsv7hl == \u\t\x\i\5\f\l\z\5\r\o\l\e\4\t\k\5\q\b\u\y\j\e\8\k\g\h\u\8\z\k\h\o\d\t\s\d\2\2\u\z\e\c\1\q\l\b\k\l\z\d\1\f\o\i\f\o\i\o\c\i\9\c\h\q\c\d\9\m\a\c\n\g\s\q\q\o\o\4\y\p\h\z\p\t\g\i\f\z\n\h\8\h\8\7\c\y\4\k\a\2\r\3\9\t\u\d\1\f\x\0\j\u\j\q\p\c\j\v\0\8\m\4\s\t\4\f\f\s\i\b\j\s\c\n\d\v\2\4\t\e\4\8\l\r\t\k\0\a\e\c\x\g\f\m\x\i\p\x\m\t\d\2\u\k\x\9\6\8\p\x\i\j\j\1\5\6\k\p\b\7\8\5\d\v\e\t\9\h\2\i\h\6\0\z\t\x\4\4\2\0\6\f\0\4\1\5\v\k\5\h\d\w\0\g\j\q\5\r\e\m\0\y\q\1\k\h\a\g\n\n\4\p\o\1\c\6\i\j\f\u\3\x\t\e\s\o\u\k\4\8\w\a\0\b\y\k\7\h\d\f\h\e\b\2\q\0\n\e\x\9\z\i\f\s\j\r\p\l\a\3\e\i\n\w\6\l\t\x\t\y\1\k\l\u\4\k\o\2\w\n\p\b\d\u\9\k\3\w\7\0\z\t\4\w\u\l\8\c\j\u\e\o\3\3\i\3\5\u\j\y\x\h\5\u\r\4\2\t\p\u\u\f\s\w\t\3\4\e\c\h\4\m\1\1\4\m\j\0\v\u\w\2\8\e\g\e\m\k\5\6\p\x\i\q\t\d\3\6\h\6\u\9\k\d\b\8\6\n\7\2\i\6\a\8\z\1\z\6\w\g\n\d\1\s\t\6\l\w\2\v\v\f\m\t\q\m\k\0\c\l\j\w\b\n\k\j\q\3\z\7\o\x\2\7\m\t\o\q\t\n\z\h\5\8\4\m\e\h\h\n\c\3\t\e\p\v\9\b\e\m\e\f\l\h\x\n\y\8\r\t\v\j\5\e\m\n\3\x\v\4\f\p\7\l\g\h\z\w\o\6\g\3\p\q\f\p\m\z\i\t\h\s\k\i\h\p\s\v\7\h\l ]] 00:08:23.508 00:08:23.508 real 0m2.094s 00:08:23.508 user 0m1.192s 00:08:23.508 sys 0m0.550s 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:23.508 ************************************ 00:08:23.508 END TEST dd_flag_nofollow_forced_aio 00:08:23.508 ************************************ 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:23.508 ************************************ 00:08:23.508 START TEST dd_flag_noatime_forced_aio 00:08:23.508 ************************************ 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732092114 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732092114 00:08:23.508 08:41:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:24.885 08:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.885 [2024-11-20 08:41:55.459759] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
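The directory and nofollow tests above assert failures rather than copies: --iflag=directory on a regular dump file must be rejected with "Not a directory", and once dd.dump0/dd.dump1 are symlinked to *.link files, opening a link with nofollow must fail with ELOOP ("Too many levels of symbolic links") while the same link still resolves normally without the flag. A short sketch of the nofollow half, again illustrative rather than the suite's own NOT/run_test plumbing:

#!/usr/bin/env bash
# Sketch of the nofollow expectations seen in dd_flag_nofollow_forced_aio above.
set -eu

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dd if=/dev/urandom of=dd.dump0 bs=512 count=1 status=none
: > dd.dump1
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link

# Opening the link with nofollow must fail (O_NOFOLLOW on a symlink -> ELOOP),
# which the trace reports as "Too many levels of symbolic links".
if "$SPDK_DD" --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
  echo "FAIL: nofollow read through a symlink unexpectedly succeeded"
fi

# Without nofollow the same link resolves and the copy goes through.
"$SPDK_DD" --aio --if=dd.dump0.link --of=dd.dump1
cmp -s dd.dump0 dd.dump1 || echo "FAIL: copy through symlink differs from source"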
00:08:24.885 [2024-11-20 08:41:55.459906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60703 ] 00:08:24.885 [2024-11-20 08:41:55.602230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.885 [2024-11-20 08:41:55.682254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.885 [2024-11-20 08:41:55.755197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.143  [2024-11-20T08:41:56.317Z] Copying: 512/512 [B] (average 500 kBps) 00:08:25.402 00:08:25.402 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.402 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732092114 )) 00:08:25.402 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.402 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732092114 )) 00:08:25.402 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.402 [2024-11-20 08:41:56.136133] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:25.402 [2024-11-20 08:41:56.136263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60709 ] 00:08:25.402 [2024-11-20 08:41:56.280392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.660 [2024-11-20 08:41:56.360337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.660 [2024-11-20 08:41:56.433207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.660  [2024-11-20T08:41:56.833Z] Copying: 512/512 [B] (average 500 kBps) 00:08:25.918 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732092116 )) 00:08:25.918 00:08:25.918 real 0m2.376s 00:08:25.918 user 0m0.761s 00:08:25.918 sys 0m0.372s 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:25.918 ************************************ 00:08:25.918 END TEST dd_flag_noatime_forced_aio 00:08:25.918 ************************************ 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.918 ************************************ 00:08:25.918 START TEST dd_flags_misc_forced_aio 00:08:25.918 ************************************ 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.918 08:41:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:26.176 [2024-11-20 08:41:56.869994] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:26.176 [2024-11-20 08:41:56.870107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60741 ] 00:08:26.176 [2024-11-20 08:41:57.015375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.435 [2024-11-20 08:41:57.094305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.435 [2024-11-20 08:41:57.167285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.435  [2024-11-20T08:41:57.607Z] Copying: 512/512 [B] (average 500 kBps) 00:08:26.692 00:08:26.692 08:41:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kk4efjdp7n5fokvllosrl34twd3lps3au3dv8bvqv4sdi65ksy88cpxdt9q8mdeb01hrspihr0b232dhz8277h83aq5i507px28395v5yth7tt1bnj58tz8deetff8zskv6htrpawa4lgk9n5yiavb317cdl3ionzh146cjrl10kwxkqkoi4p94rsz1heztow1fnn7k3axsefw0mdt7dm7qepu3uis7x6800v0te5it46evi39ba97ej7cds5fbrdd8dqe69e1be7j9sqzfdzlh1vkdrxfvzyx1iu8lg2smcurm2eq0k0jgfwkf69w7pvnyj45n7htuzlxoz99fxwnbdtk7tm4ddfmz7nm7iw83onbp62fpcbvaeukboef2eegvyphony3bwckg30c1xfe58ss6bs2oya5ct7zvdmf9lj5104y1lo2cxogzl3bg3ttku6e3a88otavizwnlcoaj1y406kfhwnxi3qofbn109ccejpaotnsyeca3a4yxi == 
\k\k\4\e\f\j\d\p\7\n\5\f\o\k\v\l\l\o\s\r\l\3\4\t\w\d\3\l\p\s\3\a\u\3\d\v\8\b\v\q\v\4\s\d\i\6\5\k\s\y\8\8\c\p\x\d\t\9\q\8\m\d\e\b\0\1\h\r\s\p\i\h\r\0\b\2\3\2\d\h\z\8\2\7\7\h\8\3\a\q\5\i\5\0\7\p\x\2\8\3\9\5\v\5\y\t\h\7\t\t\1\b\n\j\5\8\t\z\8\d\e\e\t\f\f\8\z\s\k\v\6\h\t\r\p\a\w\a\4\l\g\k\9\n\5\y\i\a\v\b\3\1\7\c\d\l\3\i\o\n\z\h\1\4\6\c\j\r\l\1\0\k\w\x\k\q\k\o\i\4\p\9\4\r\s\z\1\h\e\z\t\o\w\1\f\n\n\7\k\3\a\x\s\e\f\w\0\m\d\t\7\d\m\7\q\e\p\u\3\u\i\s\7\x\6\8\0\0\v\0\t\e\5\i\t\4\6\e\v\i\3\9\b\a\9\7\e\j\7\c\d\s\5\f\b\r\d\d\8\d\q\e\6\9\e\1\b\e\7\j\9\s\q\z\f\d\z\l\h\1\v\k\d\r\x\f\v\z\y\x\1\i\u\8\l\g\2\s\m\c\u\r\m\2\e\q\0\k\0\j\g\f\w\k\f\6\9\w\7\p\v\n\y\j\4\5\n\7\h\t\u\z\l\x\o\z\9\9\f\x\w\n\b\d\t\k\7\t\m\4\d\d\f\m\z\7\n\m\7\i\w\8\3\o\n\b\p\6\2\f\p\c\b\v\a\e\u\k\b\o\e\f\2\e\e\g\v\y\p\h\o\n\y\3\b\w\c\k\g\3\0\c\1\x\f\e\5\8\s\s\6\b\s\2\o\y\a\5\c\t\7\z\v\d\m\f\9\l\j\5\1\0\4\y\1\l\o\2\c\x\o\g\z\l\3\b\g\3\t\t\k\u\6\e\3\a\8\8\o\t\a\v\i\z\w\n\l\c\o\a\j\1\y\4\0\6\k\f\h\w\n\x\i\3\q\o\f\b\n\1\0\9\c\c\e\j\p\a\o\t\n\s\y\e\c\a\3\a\4\y\x\i ]] 00:08:26.692 08:41:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.692 08:41:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:26.692 [2024-11-20 08:41:57.544331] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:26.692 [2024-11-20 08:41:57.544452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60754 ] 00:08:26.949 [2024-11-20 08:41:57.686409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.949 [2024-11-20 08:41:57.768020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.949 [2024-11-20 08:41:57.841613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.206  [2024-11-20T08:41:58.384Z] Copying: 512/512 [B] (average 500 kBps) 00:08:27.469 00:08:27.470 08:41:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kk4efjdp7n5fokvllosrl34twd3lps3au3dv8bvqv4sdi65ksy88cpxdt9q8mdeb01hrspihr0b232dhz8277h83aq5i507px28395v5yth7tt1bnj58tz8deetff8zskv6htrpawa4lgk9n5yiavb317cdl3ionzh146cjrl10kwxkqkoi4p94rsz1heztow1fnn7k3axsefw0mdt7dm7qepu3uis7x6800v0te5it46evi39ba97ej7cds5fbrdd8dqe69e1be7j9sqzfdzlh1vkdrxfvzyx1iu8lg2smcurm2eq0k0jgfwkf69w7pvnyj45n7htuzlxoz99fxwnbdtk7tm4ddfmz7nm7iw83onbp62fpcbvaeukboef2eegvyphony3bwckg30c1xfe58ss6bs2oya5ct7zvdmf9lj5104y1lo2cxogzl3bg3ttku6e3a88otavizwnlcoaj1y406kfhwnxi3qofbn109ccejpaotnsyeca3a4yxi == 
\k\k\4\e\f\j\d\p\7\n\5\f\o\k\v\l\l\o\s\r\l\3\4\t\w\d\3\l\p\s\3\a\u\3\d\v\8\b\v\q\v\4\s\d\i\6\5\k\s\y\8\8\c\p\x\d\t\9\q\8\m\d\e\b\0\1\h\r\s\p\i\h\r\0\b\2\3\2\d\h\z\8\2\7\7\h\8\3\a\q\5\i\5\0\7\p\x\2\8\3\9\5\v\5\y\t\h\7\t\t\1\b\n\j\5\8\t\z\8\d\e\e\t\f\f\8\z\s\k\v\6\h\t\r\p\a\w\a\4\l\g\k\9\n\5\y\i\a\v\b\3\1\7\c\d\l\3\i\o\n\z\h\1\4\6\c\j\r\l\1\0\k\w\x\k\q\k\o\i\4\p\9\4\r\s\z\1\h\e\z\t\o\w\1\f\n\n\7\k\3\a\x\s\e\f\w\0\m\d\t\7\d\m\7\q\e\p\u\3\u\i\s\7\x\6\8\0\0\v\0\t\e\5\i\t\4\6\e\v\i\3\9\b\a\9\7\e\j\7\c\d\s\5\f\b\r\d\d\8\d\q\e\6\9\e\1\b\e\7\j\9\s\q\z\f\d\z\l\h\1\v\k\d\r\x\f\v\z\y\x\1\i\u\8\l\g\2\s\m\c\u\r\m\2\e\q\0\k\0\j\g\f\w\k\f\6\9\w\7\p\v\n\y\j\4\5\n\7\h\t\u\z\l\x\o\z\9\9\f\x\w\n\b\d\t\k\7\t\m\4\d\d\f\m\z\7\n\m\7\i\w\8\3\o\n\b\p\6\2\f\p\c\b\v\a\e\u\k\b\o\e\f\2\e\e\g\v\y\p\h\o\n\y\3\b\w\c\k\g\3\0\c\1\x\f\e\5\8\s\s\6\b\s\2\o\y\a\5\c\t\7\z\v\d\m\f\9\l\j\5\1\0\4\y\1\l\o\2\c\x\o\g\z\l\3\b\g\3\t\t\k\u\6\e\3\a\8\8\o\t\a\v\i\z\w\n\l\c\o\a\j\1\y\4\0\6\k\f\h\w\n\x\i\3\q\o\f\b\n\1\0\9\c\c\e\j\p\a\o\t\n\s\y\e\c\a\3\a\4\y\x\i ]] 00:08:27.470 08:41:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.470 08:41:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:27.470 [2024-11-20 08:41:58.240299] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:27.470 [2024-11-20 08:41:58.240459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60756 ] 00:08:27.727 [2024-11-20 08:41:58.395095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.727 [2024-11-20 08:41:58.474906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.727 [2024-11-20 08:41:58.546096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.727  [2024-11-20T08:41:58.900Z] Copying: 512/512 [B] (average 250 kBps) 00:08:27.985 00:08:27.985 08:41:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kk4efjdp7n5fokvllosrl34twd3lps3au3dv8bvqv4sdi65ksy88cpxdt9q8mdeb01hrspihr0b232dhz8277h83aq5i507px28395v5yth7tt1bnj58tz8deetff8zskv6htrpawa4lgk9n5yiavb317cdl3ionzh146cjrl10kwxkqkoi4p94rsz1heztow1fnn7k3axsefw0mdt7dm7qepu3uis7x6800v0te5it46evi39ba97ej7cds5fbrdd8dqe69e1be7j9sqzfdzlh1vkdrxfvzyx1iu8lg2smcurm2eq0k0jgfwkf69w7pvnyj45n7htuzlxoz99fxwnbdtk7tm4ddfmz7nm7iw83onbp62fpcbvaeukboef2eegvyphony3bwckg30c1xfe58ss6bs2oya5ct7zvdmf9lj5104y1lo2cxogzl3bg3ttku6e3a88otavizwnlcoaj1y406kfhwnxi3qofbn109ccejpaotnsyeca3a4yxi == 
\k\k\4\e\f\j\d\p\7\n\5\f\o\k\v\l\l\o\s\r\l\3\4\t\w\d\3\l\p\s\3\a\u\3\d\v\8\b\v\q\v\4\s\d\i\6\5\k\s\y\8\8\c\p\x\d\t\9\q\8\m\d\e\b\0\1\h\r\s\p\i\h\r\0\b\2\3\2\d\h\z\8\2\7\7\h\8\3\a\q\5\i\5\0\7\p\x\2\8\3\9\5\v\5\y\t\h\7\t\t\1\b\n\j\5\8\t\z\8\d\e\e\t\f\f\8\z\s\k\v\6\h\t\r\p\a\w\a\4\l\g\k\9\n\5\y\i\a\v\b\3\1\7\c\d\l\3\i\o\n\z\h\1\4\6\c\j\r\l\1\0\k\w\x\k\q\k\o\i\4\p\9\4\r\s\z\1\h\e\z\t\o\w\1\f\n\n\7\k\3\a\x\s\e\f\w\0\m\d\t\7\d\m\7\q\e\p\u\3\u\i\s\7\x\6\8\0\0\v\0\t\e\5\i\t\4\6\e\v\i\3\9\b\a\9\7\e\j\7\c\d\s\5\f\b\r\d\d\8\d\q\e\6\9\e\1\b\e\7\j\9\s\q\z\f\d\z\l\h\1\v\k\d\r\x\f\v\z\y\x\1\i\u\8\l\g\2\s\m\c\u\r\m\2\e\q\0\k\0\j\g\f\w\k\f\6\9\w\7\p\v\n\y\j\4\5\n\7\h\t\u\z\l\x\o\z\9\9\f\x\w\n\b\d\t\k\7\t\m\4\d\d\f\m\z\7\n\m\7\i\w\8\3\o\n\b\p\6\2\f\p\c\b\v\a\e\u\k\b\o\e\f\2\e\e\g\v\y\p\h\o\n\y\3\b\w\c\k\g\3\0\c\1\x\f\e\5\8\s\s\6\b\s\2\o\y\a\5\c\t\7\z\v\d\m\f\9\l\j\5\1\0\4\y\1\l\o\2\c\x\o\g\z\l\3\b\g\3\t\t\k\u\6\e\3\a\8\8\o\t\a\v\i\z\w\n\l\c\o\a\j\1\y\4\0\6\k\f\h\w\n\x\i\3\q\o\f\b\n\1\0\9\c\c\e\j\p\a\o\t\n\s\y\e\c\a\3\a\4\y\x\i ]] 00:08:27.985 08:41:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.985 08:41:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:28.243 [2024-11-20 08:41:58.928571] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:28.243 [2024-11-20 08:41:58.928686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60769 ] 00:08:28.243 [2024-11-20 08:41:59.071724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.243 [2024-11-20 08:41:59.150901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.516 [2024-11-20 08:41:59.222233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.517  [2024-11-20T08:41:59.696Z] Copying: 512/512 [B] (average 500 kBps) 00:08:28.781 00:08:28.781 08:41:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kk4efjdp7n5fokvllosrl34twd3lps3au3dv8bvqv4sdi65ksy88cpxdt9q8mdeb01hrspihr0b232dhz8277h83aq5i507px28395v5yth7tt1bnj58tz8deetff8zskv6htrpawa4lgk9n5yiavb317cdl3ionzh146cjrl10kwxkqkoi4p94rsz1heztow1fnn7k3axsefw0mdt7dm7qepu3uis7x6800v0te5it46evi39ba97ej7cds5fbrdd8dqe69e1be7j9sqzfdzlh1vkdrxfvzyx1iu8lg2smcurm2eq0k0jgfwkf69w7pvnyj45n7htuzlxoz99fxwnbdtk7tm4ddfmz7nm7iw83onbp62fpcbvaeukboef2eegvyphony3bwckg30c1xfe58ss6bs2oya5ct7zvdmf9lj5104y1lo2cxogzl3bg3ttku6e3a88otavizwnlcoaj1y406kfhwnxi3qofbn109ccejpaotnsyeca3a4yxi == 
\k\k\4\e\f\j\d\p\7\n\5\f\o\k\v\l\l\o\s\r\l\3\4\t\w\d\3\l\p\s\3\a\u\3\d\v\8\b\v\q\v\4\s\d\i\6\5\k\s\y\8\8\c\p\x\d\t\9\q\8\m\d\e\b\0\1\h\r\s\p\i\h\r\0\b\2\3\2\d\h\z\8\2\7\7\h\8\3\a\q\5\i\5\0\7\p\x\2\8\3\9\5\v\5\y\t\h\7\t\t\1\b\n\j\5\8\t\z\8\d\e\e\t\f\f\8\z\s\k\v\6\h\t\r\p\a\w\a\4\l\g\k\9\n\5\y\i\a\v\b\3\1\7\c\d\l\3\i\o\n\z\h\1\4\6\c\j\r\l\1\0\k\w\x\k\q\k\o\i\4\p\9\4\r\s\z\1\h\e\z\t\o\w\1\f\n\n\7\k\3\a\x\s\e\f\w\0\m\d\t\7\d\m\7\q\e\p\u\3\u\i\s\7\x\6\8\0\0\v\0\t\e\5\i\t\4\6\e\v\i\3\9\b\a\9\7\e\j\7\c\d\s\5\f\b\r\d\d\8\d\q\e\6\9\e\1\b\e\7\j\9\s\q\z\f\d\z\l\h\1\v\k\d\r\x\f\v\z\y\x\1\i\u\8\l\g\2\s\m\c\u\r\m\2\e\q\0\k\0\j\g\f\w\k\f\6\9\w\7\p\v\n\y\j\4\5\n\7\h\t\u\z\l\x\o\z\9\9\f\x\w\n\b\d\t\k\7\t\m\4\d\d\f\m\z\7\n\m\7\i\w\8\3\o\n\b\p\6\2\f\p\c\b\v\a\e\u\k\b\o\e\f\2\e\e\g\v\y\p\h\o\n\y\3\b\w\c\k\g\3\0\c\1\x\f\e\5\8\s\s\6\b\s\2\o\y\a\5\c\t\7\z\v\d\m\f\9\l\j\5\1\0\4\y\1\l\o\2\c\x\o\g\z\l\3\b\g\3\t\t\k\u\6\e\3\a\8\8\o\t\a\v\i\z\w\n\l\c\o\a\j\1\y\4\0\6\k\f\h\w\n\x\i\3\q\o\f\b\n\1\0\9\c\c\e\j\p\a\o\t\n\s\y\e\c\a\3\a\4\y\x\i ]] 00:08:28.781 08:41:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:28.781 08:41:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:28.781 08:41:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:28.781 08:41:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:28.781 08:41:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:28.781 08:41:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:28.781 [2024-11-20 08:41:59.629887] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:28.781 [2024-11-20 08:41:59.630015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60778 ] 00:08:29.041 [2024-11-20 08:41:59.783908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.041 [2024-11-20 08:41:59.870578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.041 [2024-11-20 08:41:59.946010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.299  [2024-11-20T08:42:00.472Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.557 00:08:29.557 08:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f4rmlbaxk676lcq0h4i40t6m30it5mb1i3w4lc080thpb9lct6tha5kcl7j5mixxg12fy7wmxvn244ojpf0yf6smurysj7c03pgtodnug6qjzzypf3372xmc6z9z04d8969fu9fnyguxwnjaviojhg0ir38bw0m1u6dpiv641sl8nlt50dx20f7vj2cxrqu94gidn69f9fsq6idt9tpevf370ws9hfxibmjnifxzy79sd91s67bb315hu5pptbkss42tgbz32q66d17c39llh3gp81d0pmvsjpvv3y9xwkw9fzpl1y4p6m8ekp5qthv4zfri06zv0blaj7d45ibtqa9c8rs29vr5xuqttwmujmf24bg0j281v3morqfnz64p4cjlq3hmto0lyw789shdvamgoyqlna6hu0xoll7ezzyy6wtgnjwrdmtfmjemq8irbqtawdk9gtes7yd5l7mmv980qrocv0t0lgk6ljgcc1r5j8dapojbptbcrvclj4ug == \f\4\r\m\l\b\a\x\k\6\7\6\l\c\q\0\h\4\i\4\0\t\6\m\3\0\i\t\5\m\b\1\i\3\w\4\l\c\0\8\0\t\h\p\b\9\l\c\t\6\t\h\a\5\k\c\l\7\j\5\m\i\x\x\g\1\2\f\y\7\w\m\x\v\n\2\4\4\o\j\p\f\0\y\f\6\s\m\u\r\y\s\j\7\c\0\3\p\g\t\o\d\n\u\g\6\q\j\z\z\y\p\f\3\3\7\2\x\m\c\6\z\9\z\0\4\d\8\9\6\9\f\u\9\f\n\y\g\u\x\w\n\j\a\v\i\o\j\h\g\0\i\r\3\8\b\w\0\m\1\u\6\d\p\i\v\6\4\1\s\l\8\n\l\t\5\0\d\x\2\0\f\7\v\j\2\c\x\r\q\u\9\4\g\i\d\n\6\9\f\9\f\s\q\6\i\d\t\9\t\p\e\v\f\3\7\0\w\s\9\h\f\x\i\b\m\j\n\i\f\x\z\y\7\9\s\d\9\1\s\6\7\b\b\3\1\5\h\u\5\p\p\t\b\k\s\s\4\2\t\g\b\z\3\2\q\6\6\d\1\7\c\3\9\l\l\h\3\g\p\8\1\d\0\p\m\v\s\j\p\v\v\3\y\9\x\w\k\w\9\f\z\p\l\1\y\4\p\6\m\8\e\k\p\5\q\t\h\v\4\z\f\r\i\0\6\z\v\0\b\l\a\j\7\d\4\5\i\b\t\q\a\9\c\8\r\s\2\9\v\r\5\x\u\q\t\t\w\m\u\j\m\f\2\4\b\g\0\j\2\8\1\v\3\m\o\r\q\f\n\z\6\4\p\4\c\j\l\q\3\h\m\t\o\0\l\y\w\7\8\9\s\h\d\v\a\m\g\o\y\q\l\n\a\6\h\u\0\x\o\l\l\7\e\z\z\y\y\6\w\t\g\n\j\w\r\d\m\t\f\m\j\e\m\q\8\i\r\b\q\t\a\w\d\k\9\g\t\e\s\7\y\d\5\l\7\m\m\v\9\8\0\q\r\o\c\v\0\t\0\l\g\k\6\l\j\g\c\c\1\r\5\j\8\d\a\p\o\j\b\p\t\b\c\r\v\c\l\j\4\u\g ]] 00:08:29.557 08:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.557 08:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:29.557 [2024-11-20 08:42:00.331381] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:29.557 [2024-11-20 08:42:00.331521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60784 ] 00:08:29.816 [2024-11-20 08:42:00.474450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.816 [2024-11-20 08:42:00.574224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.816 [2024-11-20 08:42:00.647500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.816  [2024-11-20T08:42:00.989Z] Copying: 512/512 [B] (average 500 kBps) 00:08:30.074 00:08:30.074 08:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f4rmlbaxk676lcq0h4i40t6m30it5mb1i3w4lc080thpb9lct6tha5kcl7j5mixxg12fy7wmxvn244ojpf0yf6smurysj7c03pgtodnug6qjzzypf3372xmc6z9z04d8969fu9fnyguxwnjaviojhg0ir38bw0m1u6dpiv641sl8nlt50dx20f7vj2cxrqu94gidn69f9fsq6idt9tpevf370ws9hfxibmjnifxzy79sd91s67bb315hu5pptbkss42tgbz32q66d17c39llh3gp81d0pmvsjpvv3y9xwkw9fzpl1y4p6m8ekp5qthv4zfri06zv0blaj7d45ibtqa9c8rs29vr5xuqttwmujmf24bg0j281v3morqfnz64p4cjlq3hmto0lyw789shdvamgoyqlna6hu0xoll7ezzyy6wtgnjwrdmtfmjemq8irbqtawdk9gtes7yd5l7mmv980qrocv0t0lgk6ljgcc1r5j8dapojbptbcrvclj4ug == \f\4\r\m\l\b\a\x\k\6\7\6\l\c\q\0\h\4\i\4\0\t\6\m\3\0\i\t\5\m\b\1\i\3\w\4\l\c\0\8\0\t\h\p\b\9\l\c\t\6\t\h\a\5\k\c\l\7\j\5\m\i\x\x\g\1\2\f\y\7\w\m\x\v\n\2\4\4\o\j\p\f\0\y\f\6\s\m\u\r\y\s\j\7\c\0\3\p\g\t\o\d\n\u\g\6\q\j\z\z\y\p\f\3\3\7\2\x\m\c\6\z\9\z\0\4\d\8\9\6\9\f\u\9\f\n\y\g\u\x\w\n\j\a\v\i\o\j\h\g\0\i\r\3\8\b\w\0\m\1\u\6\d\p\i\v\6\4\1\s\l\8\n\l\t\5\0\d\x\2\0\f\7\v\j\2\c\x\r\q\u\9\4\g\i\d\n\6\9\f\9\f\s\q\6\i\d\t\9\t\p\e\v\f\3\7\0\w\s\9\h\f\x\i\b\m\j\n\i\f\x\z\y\7\9\s\d\9\1\s\6\7\b\b\3\1\5\h\u\5\p\p\t\b\k\s\s\4\2\t\g\b\z\3\2\q\6\6\d\1\7\c\3\9\l\l\h\3\g\p\8\1\d\0\p\m\v\s\j\p\v\v\3\y\9\x\w\k\w\9\f\z\p\l\1\y\4\p\6\m\8\e\k\p\5\q\t\h\v\4\z\f\r\i\0\6\z\v\0\b\l\a\j\7\d\4\5\i\b\t\q\a\9\c\8\r\s\2\9\v\r\5\x\u\q\t\t\w\m\u\j\m\f\2\4\b\g\0\j\2\8\1\v\3\m\o\r\q\f\n\z\6\4\p\4\c\j\l\q\3\h\m\t\o\0\l\y\w\7\8\9\s\h\d\v\a\m\g\o\y\q\l\n\a\6\h\u\0\x\o\l\l\7\e\z\z\y\y\6\w\t\g\n\j\w\r\d\m\t\f\m\j\e\m\q\8\i\r\b\q\t\a\w\d\k\9\g\t\e\s\7\y\d\5\l\7\m\m\v\9\8\0\q\r\o\c\v\0\t\0\l\g\k\6\l\j\g\c\c\1\r\5\j\8\d\a\p\o\j\b\p\t\b\c\r\v\c\l\j\4\u\g ]] 00:08:30.074 08:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.074 08:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:30.332 [2024-11-20 08:42:01.026518] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:30.333 [2024-11-20 08:42:01.026643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60797 ] 00:08:30.333 [2024-11-20 08:42:01.169784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.591 [2024-11-20 08:42:01.248110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.591 [2024-11-20 08:42:01.320911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.591  [2024-11-20T08:42:01.765Z] Copying: 512/512 [B] (average 500 kBps) 00:08:30.850 00:08:30.850 08:42:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f4rmlbaxk676lcq0h4i40t6m30it5mb1i3w4lc080thpb9lct6tha5kcl7j5mixxg12fy7wmxvn244ojpf0yf6smurysj7c03pgtodnug6qjzzypf3372xmc6z9z04d8969fu9fnyguxwnjaviojhg0ir38bw0m1u6dpiv641sl8nlt50dx20f7vj2cxrqu94gidn69f9fsq6idt9tpevf370ws9hfxibmjnifxzy79sd91s67bb315hu5pptbkss42tgbz32q66d17c39llh3gp81d0pmvsjpvv3y9xwkw9fzpl1y4p6m8ekp5qthv4zfri06zv0blaj7d45ibtqa9c8rs29vr5xuqttwmujmf24bg0j281v3morqfnz64p4cjlq3hmto0lyw789shdvamgoyqlna6hu0xoll7ezzyy6wtgnjwrdmtfmjemq8irbqtawdk9gtes7yd5l7mmv980qrocv0t0lgk6ljgcc1r5j8dapojbptbcrvclj4ug == \f\4\r\m\l\b\a\x\k\6\7\6\l\c\q\0\h\4\i\4\0\t\6\m\3\0\i\t\5\m\b\1\i\3\w\4\l\c\0\8\0\t\h\p\b\9\l\c\t\6\t\h\a\5\k\c\l\7\j\5\m\i\x\x\g\1\2\f\y\7\w\m\x\v\n\2\4\4\o\j\p\f\0\y\f\6\s\m\u\r\y\s\j\7\c\0\3\p\g\t\o\d\n\u\g\6\q\j\z\z\y\p\f\3\3\7\2\x\m\c\6\z\9\z\0\4\d\8\9\6\9\f\u\9\f\n\y\g\u\x\w\n\j\a\v\i\o\j\h\g\0\i\r\3\8\b\w\0\m\1\u\6\d\p\i\v\6\4\1\s\l\8\n\l\t\5\0\d\x\2\0\f\7\v\j\2\c\x\r\q\u\9\4\g\i\d\n\6\9\f\9\f\s\q\6\i\d\t\9\t\p\e\v\f\3\7\0\w\s\9\h\f\x\i\b\m\j\n\i\f\x\z\y\7\9\s\d\9\1\s\6\7\b\b\3\1\5\h\u\5\p\p\t\b\k\s\s\4\2\t\g\b\z\3\2\q\6\6\d\1\7\c\3\9\l\l\h\3\g\p\8\1\d\0\p\m\v\s\j\p\v\v\3\y\9\x\w\k\w\9\f\z\p\l\1\y\4\p\6\m\8\e\k\p\5\q\t\h\v\4\z\f\r\i\0\6\z\v\0\b\l\a\j\7\d\4\5\i\b\t\q\a\9\c\8\r\s\2\9\v\r\5\x\u\q\t\t\w\m\u\j\m\f\2\4\b\g\0\j\2\8\1\v\3\m\o\r\q\f\n\z\6\4\p\4\c\j\l\q\3\h\m\t\o\0\l\y\w\7\8\9\s\h\d\v\a\m\g\o\y\q\l\n\a\6\h\u\0\x\o\l\l\7\e\z\z\y\y\6\w\t\g\n\j\w\r\d\m\t\f\m\j\e\m\q\8\i\r\b\q\t\a\w\d\k\9\g\t\e\s\7\y\d\5\l\7\m\m\v\9\8\0\q\r\o\c\v\0\t\0\l\g\k\6\l\j\g\c\c\1\r\5\j\8\d\a\p\o\j\b\p\t\b\c\r\v\c\l\j\4\u\g ]] 00:08:30.850 08:42:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.850 08:42:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:30.850 [2024-11-20 08:42:01.702276] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:30.850 [2024-11-20 08:42:01.702383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60805 ] 00:08:31.108 [2024-11-20 08:42:01.847418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.108 [2024-11-20 08:42:01.930457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.108 [2024-11-20 08:42:02.001616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.367  [2024-11-20T08:42:02.541Z] Copying: 512/512 [B] (average 166 kBps) 00:08:31.626 00:08:31.626 08:42:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ f4rmlbaxk676lcq0h4i40t6m30it5mb1i3w4lc080thpb9lct6tha5kcl7j5mixxg12fy7wmxvn244ojpf0yf6smurysj7c03pgtodnug6qjzzypf3372xmc6z9z04d8969fu9fnyguxwnjaviojhg0ir38bw0m1u6dpiv641sl8nlt50dx20f7vj2cxrqu94gidn69f9fsq6idt9tpevf370ws9hfxibmjnifxzy79sd91s67bb315hu5pptbkss42tgbz32q66d17c39llh3gp81d0pmvsjpvv3y9xwkw9fzpl1y4p6m8ekp5qthv4zfri06zv0blaj7d45ibtqa9c8rs29vr5xuqttwmujmf24bg0j281v3morqfnz64p4cjlq3hmto0lyw789shdvamgoyqlna6hu0xoll7ezzyy6wtgnjwrdmtfmjemq8irbqtawdk9gtes7yd5l7mmv980qrocv0t0lgk6ljgcc1r5j8dapojbptbcrvclj4ug == \f\4\r\m\l\b\a\x\k\6\7\6\l\c\q\0\h\4\i\4\0\t\6\m\3\0\i\t\5\m\b\1\i\3\w\4\l\c\0\8\0\t\h\p\b\9\l\c\t\6\t\h\a\5\k\c\l\7\j\5\m\i\x\x\g\1\2\f\y\7\w\m\x\v\n\2\4\4\o\j\p\f\0\y\f\6\s\m\u\r\y\s\j\7\c\0\3\p\g\t\o\d\n\u\g\6\q\j\z\z\y\p\f\3\3\7\2\x\m\c\6\z\9\z\0\4\d\8\9\6\9\f\u\9\f\n\y\g\u\x\w\n\j\a\v\i\o\j\h\g\0\i\r\3\8\b\w\0\m\1\u\6\d\p\i\v\6\4\1\s\l\8\n\l\t\5\0\d\x\2\0\f\7\v\j\2\c\x\r\q\u\9\4\g\i\d\n\6\9\f\9\f\s\q\6\i\d\t\9\t\p\e\v\f\3\7\0\w\s\9\h\f\x\i\b\m\j\n\i\f\x\z\y\7\9\s\d\9\1\s\6\7\b\b\3\1\5\h\u\5\p\p\t\b\k\s\s\4\2\t\g\b\z\3\2\q\6\6\d\1\7\c\3\9\l\l\h\3\g\p\8\1\d\0\p\m\v\s\j\p\v\v\3\y\9\x\w\k\w\9\f\z\p\l\1\y\4\p\6\m\8\e\k\p\5\q\t\h\v\4\z\f\r\i\0\6\z\v\0\b\l\a\j\7\d\4\5\i\b\t\q\a\9\c\8\r\s\2\9\v\r\5\x\u\q\t\t\w\m\u\j\m\f\2\4\b\g\0\j\2\8\1\v\3\m\o\r\q\f\n\z\6\4\p\4\c\j\l\q\3\h\m\t\o\0\l\y\w\7\8\9\s\h\d\v\a\m\g\o\y\q\l\n\a\6\h\u\0\x\o\l\l\7\e\z\z\y\y\6\w\t\g\n\j\w\r\d\m\t\f\m\j\e\m\q\8\i\r\b\q\t\a\w\d\k\9\g\t\e\s\7\y\d\5\l\7\m\m\v\9\8\0\q\r\o\c\v\0\t\0\l\g\k\6\l\j\g\c\c\1\r\5\j\8\d\a\p\o\j\b\p\t\b\c\r\v\c\l\j\4\u\g ]] 00:08:31.626 00:08:31.626 real 0m5.535s 00:08:31.626 user 0m3.184s 00:08:31.626 sys 0m1.360s 00:08:31.626 08:42:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.626 08:42:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:31.626 ************************************ 00:08:31.626 END TEST dd_flags_misc_forced_aio 00:08:31.626 ************************************ 00:08:31.626 08:42:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:31.626 08:42:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:31.626 08:42:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:31.626 00:08:31.626 real 0m24.408s 00:08:31.626 user 0m12.707s 00:08:31.626 sys 0m8.140s 00:08:31.626 08:42:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.626 ************************************ 00:08:31.626 END TEST spdk_dd_posix 
00:08:31.626 08:42:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:31.626 ************************************ 00:08:31.626 08:42:02 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:31.626 08:42:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.626 08:42:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.626 08:42:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:31.626 ************************************ 00:08:31.626 START TEST spdk_dd_malloc 00:08:31.626 ************************************ 00:08:31.626 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:31.626 * Looking for test storage... 00:08:31.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:31.626 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.626 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.626 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.885 --rc genhtml_branch_coverage=1 00:08:31.885 --rc genhtml_function_coverage=1 00:08:31.885 --rc genhtml_legend=1 00:08:31.885 --rc geninfo_all_blocks=1 00:08:31.885 --rc geninfo_unexecuted_blocks=1 00:08:31.885 00:08:31.885 ' 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.885 --rc genhtml_branch_coverage=1 00:08:31.885 --rc genhtml_function_coverage=1 00:08:31.885 --rc genhtml_legend=1 00:08:31.885 --rc geninfo_all_blocks=1 00:08:31.885 --rc geninfo_unexecuted_blocks=1 00:08:31.885 00:08:31.885 ' 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.885 --rc genhtml_branch_coverage=1 00:08:31.885 --rc genhtml_function_coverage=1 00:08:31.885 --rc genhtml_legend=1 00:08:31.885 --rc geninfo_all_blocks=1 00:08:31.885 --rc geninfo_unexecuted_blocks=1 00:08:31.885 00:08:31.885 ' 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.885 --rc genhtml_branch_coverage=1 00:08:31.885 --rc genhtml_function_coverage=1 00:08:31.885 --rc genhtml_legend=1 00:08:31.885 --rc geninfo_all_blocks=1 00:08:31.885 --rc geninfo_unexecuted_blocks=1 00:08:31.885 00:08:31.885 ' 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.885 08:42:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.886 08:42:02 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:31.886 ************************************ 00:08:31.886 START TEST dd_malloc_copy 00:08:31.886 ************************************ 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:31.886 08:42:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:31.886 [2024-11-20 08:42:02.690480] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:31.886 [2024-11-20 08:42:02.691230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60887 ] 00:08:31.886 { 00:08:31.886 "subsystems": [ 00:08:31.886 { 00:08:31.886 "subsystem": "bdev", 00:08:31.886 "config": [ 00:08:31.886 { 00:08:31.886 "params": { 00:08:31.886 "block_size": 512, 00:08:31.886 "num_blocks": 1048576, 00:08:31.886 "name": "malloc0" 00:08:31.886 }, 00:08:31.886 "method": "bdev_malloc_create" 00:08:31.886 }, 00:08:31.886 { 00:08:31.886 "params": { 00:08:31.886 "block_size": 512, 00:08:31.886 "num_blocks": 1048576, 00:08:31.886 "name": "malloc1" 00:08:31.886 }, 00:08:31.886 "method": "bdev_malloc_create" 00:08:31.886 }, 00:08:31.886 { 00:08:31.886 "method": "bdev_wait_for_examine" 00:08:31.886 } 00:08:31.886 ] 00:08:31.886 } 00:08:31.886 ] 00:08:31.886 } 00:08:32.144 [2024-11-20 08:42:02.837230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.144 [2024-11-20 08:42:02.917282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.144 [2024-11-20 08:42:02.988712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.520  [2024-11-20T08:42:05.837Z] Copying: 198/512 [MB] (198 MBps) [2024-11-20T08:42:06.096Z] Copying: 397/512 [MB] (198 MBps) [2024-11-20T08:42:07.032Z] Copying: 512/512 [MB] (average 198 MBps) 00:08:36.117 00:08:36.117 08:42:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:36.117 08:42:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:36.117 08:42:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:36.117 08:42:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:36.117 [2024-11-20 08:42:06.862426] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:36.117 [2024-11-20 08:42:06.862541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60940 ] 00:08:36.117 { 00:08:36.117 "subsystems": [ 00:08:36.117 { 00:08:36.117 "subsystem": "bdev", 00:08:36.117 "config": [ 00:08:36.117 { 00:08:36.117 "params": { 00:08:36.117 "block_size": 512, 00:08:36.117 "num_blocks": 1048576, 00:08:36.117 "name": "malloc0" 00:08:36.117 }, 00:08:36.117 "method": "bdev_malloc_create" 00:08:36.117 }, 00:08:36.117 { 00:08:36.117 "params": { 00:08:36.117 "block_size": 512, 00:08:36.117 "num_blocks": 1048576, 00:08:36.117 "name": "malloc1" 00:08:36.117 }, 00:08:36.117 "method": "bdev_malloc_create" 00:08:36.117 }, 00:08:36.117 { 00:08:36.117 "method": "bdev_wait_for_examine" 00:08:36.117 } 00:08:36.117 ] 00:08:36.117 } 00:08:36.117 ] 00:08:36.117 } 00:08:36.117 [2024-11-20 08:42:07.014063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.375 [2024-11-20 08:42:07.099903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.375 [2024-11-20 08:42:07.174003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.750  [2024-11-20T08:42:10.040Z] Copying: 196/512 [MB] (196 MBps) [2024-11-20T08:42:10.298Z] Copying: 393/512 [MB] (196 MBps) [2024-11-20T08:42:11.232Z] Copying: 512/512 [MB] (average 196 MBps) 00:08:40.317 00:08:40.317 00:08:40.317 real 0m8.364s 00:08:40.317 user 0m7.141s 00:08:40.317 sys 0m1.056s 00:08:40.317 08:42:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.317 ************************************ 00:08:40.317 END TEST dd_malloc_copy 00:08:40.317 ************************************ 00:08:40.317 08:42:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:40.317 00:08:40.317 real 0m8.604s 00:08:40.317 user 0m7.274s 00:08:40.317 sys 0m1.167s 00:08:40.317 08:42:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.317 08:42:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:40.317 ************************************ 00:08:40.317 END TEST spdk_dd_malloc 00:08:40.317 ************************************ 00:08:40.317 08:42:11 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:40.317 08:42:11 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:40.317 08:42:11 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.317 08:42:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:40.317 ************************************ 00:08:40.317 START TEST spdk_dd_bdev_to_bdev 00:08:40.317 ************************************ 00:08:40.317 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:40.317 * Looking for test storage... 
00:08:40.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:40.317 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.317 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.317 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.576 --rc genhtml_branch_coverage=1 00:08:40.576 --rc genhtml_function_coverage=1 00:08:40.576 --rc genhtml_legend=1 00:08:40.576 --rc geninfo_all_blocks=1 00:08:40.576 --rc geninfo_unexecuted_blocks=1 00:08:40.576 00:08:40.576 ' 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.576 --rc genhtml_branch_coverage=1 00:08:40.576 --rc genhtml_function_coverage=1 00:08:40.576 --rc genhtml_legend=1 00:08:40.576 --rc geninfo_all_blocks=1 00:08:40.576 --rc geninfo_unexecuted_blocks=1 00:08:40.576 00:08:40.576 ' 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.576 --rc genhtml_branch_coverage=1 00:08:40.576 --rc genhtml_function_coverage=1 00:08:40.576 --rc genhtml_legend=1 00:08:40.576 --rc geninfo_all_blocks=1 00:08:40.576 --rc geninfo_unexecuted_blocks=1 00:08:40.576 00:08:40.576 ' 00:08:40.576 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.576 --rc genhtml_branch_coverage=1 00:08:40.576 --rc genhtml_function_coverage=1 00:08:40.576 --rc genhtml_legend=1 00:08:40.576 --rc geninfo_all_blocks=1 00:08:40.576 --rc geninfo_unexecuted_blocks=1 00:08:40.576 00:08:40.577 ' 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.577 08:42:11 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:40.577 ************************************ 00:08:40.577 START TEST dd_inflate_file 00:08:40.577 ************************************ 00:08:40.577 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:40.577 [2024-11-20 08:42:11.348239] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:40.577 [2024-11-20 08:42:11.348365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 00:08:40.836 [2024-11-20 08:42:11.499088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.836 [2024-11-20 08:42:11.579624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.836 [2024-11-20 08:42:11.650884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.836  [2024-11-20T08:42:12.009Z] Copying: 64/64 [MB] (average 1391 MBps) 00:08:41.094 00:08:41.094 00:08:41.094 real 0m0.695s 00:08:41.094 user 0m0.420s 00:08:41.094 sys 0m0.365s 00:08:41.094 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.094 ************************************ 00:08:41.094 END TEST dd_inflate_file 00:08:41.094 ************************************ 00:08:41.094 08:42:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:41.354 ************************************ 00:08:41.354 START TEST dd_copy_to_out_bdev 00:08:41.354 ************************************ 00:08:41.354 08:42:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:41.354 { 00:08:41.354 "subsystems": [ 00:08:41.354 { 00:08:41.354 "subsystem": "bdev", 00:08:41.354 "config": [ 00:08:41.354 { 00:08:41.354 "params": { 00:08:41.354 "trtype": "pcie", 00:08:41.354 "traddr": "0000:00:10.0", 00:08:41.354 "name": "Nvme0" 00:08:41.354 }, 00:08:41.354 "method": "bdev_nvme_attach_controller" 00:08:41.354 }, 00:08:41.354 { 00:08:41.354 "params": { 00:08:41.354 "trtype": "pcie", 00:08:41.354 "traddr": "0000:00:11.0", 00:08:41.354 "name": "Nvme1" 00:08:41.354 }, 00:08:41.354 "method": "bdev_nvme_attach_controller" 00:08:41.354 }, 00:08:41.354 { 00:08:41.354 "method": "bdev_wait_for_examine" 00:08:41.354 } 00:08:41.354 ] 00:08:41.354 } 00:08:41.354 ] 00:08:41.354 } 00:08:41.354 [2024-11-20 08:42:12.101729] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:41.354 [2024-11-20 08:42:12.101865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61097 ] 00:08:41.354 [2024-11-20 08:42:12.245584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.613 [2024-11-20 08:42:12.325306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.613 [2024-11-20 08:42:12.396980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.989  [2024-11-20T08:42:13.904Z] Copying: 63/64 [MB] (63 MBps) [2024-11-20T08:42:13.904Z] Copying: 64/64 [MB] (average 63 MBps) 00:08:42.989 00:08:42.989 00:08:42.989 real 0m1.841s 00:08:42.989 user 0m1.574s 00:08:42.989 sys 0m1.434s 00:08:42.989 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.989 ************************************ 00:08:42.989 END TEST dd_copy_to_out_bdev 00:08:42.989 ************************************ 00:08:42.989 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:43.248 ************************************ 00:08:43.248 START TEST dd_offset_magic 00:08:43.248 ************************************ 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:43.248 08:42:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:43.248 [2024-11-20 08:42:14.000188] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:43.248 [2024-11-20 08:42:14.000272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61142 ] 00:08:43.248 { 00:08:43.248 "subsystems": [ 00:08:43.248 { 00:08:43.248 "subsystem": "bdev", 00:08:43.248 "config": [ 00:08:43.248 { 00:08:43.248 "params": { 00:08:43.248 "trtype": "pcie", 00:08:43.248 "traddr": "0000:00:10.0", 00:08:43.248 "name": "Nvme0" 00:08:43.248 }, 00:08:43.248 "method": "bdev_nvme_attach_controller" 00:08:43.248 }, 00:08:43.248 { 00:08:43.248 "params": { 00:08:43.248 "trtype": "pcie", 00:08:43.248 "traddr": "0000:00:11.0", 00:08:43.248 "name": "Nvme1" 00:08:43.248 }, 00:08:43.248 "method": "bdev_nvme_attach_controller" 00:08:43.248 }, 00:08:43.248 { 00:08:43.248 "method": "bdev_wait_for_examine" 00:08:43.248 } 00:08:43.248 ] 00:08:43.248 } 00:08:43.248 ] 00:08:43.248 } 00:08:43.248 [2024-11-20 08:42:14.146516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.508 [2024-11-20 08:42:14.225888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.508 [2024-11-20 08:42:14.299329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.767  [2024-11-20T08:42:14.940Z] Copying: 65/65 [MB] (average 1101 MBps) 00:08:44.025 00:08:44.025 08:42:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:44.025 08:42:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:44.025 08:42:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:44.025 08:42:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:44.025 [2024-11-20 08:42:14.911437] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:44.025 [2024-11-20 08:42:14.911558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:08:44.025 { 00:08:44.025 "subsystems": [ 00:08:44.025 { 00:08:44.025 "subsystem": "bdev", 00:08:44.025 "config": [ 00:08:44.025 { 00:08:44.025 "params": { 00:08:44.025 "trtype": "pcie", 00:08:44.025 "traddr": "0000:00:10.0", 00:08:44.025 "name": "Nvme0" 00:08:44.025 }, 00:08:44.025 "method": "bdev_nvme_attach_controller" 00:08:44.025 }, 00:08:44.025 { 00:08:44.025 "params": { 00:08:44.025 "trtype": "pcie", 00:08:44.025 "traddr": "0000:00:11.0", 00:08:44.025 "name": "Nvme1" 00:08:44.025 }, 00:08:44.025 "method": "bdev_nvme_attach_controller" 00:08:44.025 }, 00:08:44.025 { 00:08:44.025 "method": "bdev_wait_for_examine" 00:08:44.025 } 00:08:44.025 ] 00:08:44.025 } 00:08:44.025 ] 00:08:44.025 } 00:08:44.284 [2024-11-20 08:42:15.059553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.284 [2024-11-20 08:42:15.158721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.542 [2024-11-20 08:42:15.232775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.542  [2024-11-20T08:42:15.716Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:44.801 00:08:44.801 08:42:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:44.801 08:42:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:44.801 08:42:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:44.801 08:42:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:44.801 08:42:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:44.801 08:42:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:44.801 08:42:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:45.059 [2024-11-20 08:42:15.729120] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:45.059 [2024-11-20 08:42:15.729233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61179 ] 00:08:45.059 { 00:08:45.059 "subsystems": [ 00:08:45.059 { 00:08:45.059 "subsystem": "bdev", 00:08:45.059 "config": [ 00:08:45.059 { 00:08:45.059 "params": { 00:08:45.059 "trtype": "pcie", 00:08:45.059 "traddr": "0000:00:10.0", 00:08:45.059 "name": "Nvme0" 00:08:45.059 }, 00:08:45.059 "method": "bdev_nvme_attach_controller" 00:08:45.059 }, 00:08:45.059 { 00:08:45.059 "params": { 00:08:45.059 "trtype": "pcie", 00:08:45.059 "traddr": "0000:00:11.0", 00:08:45.059 "name": "Nvme1" 00:08:45.059 }, 00:08:45.059 "method": "bdev_nvme_attach_controller" 00:08:45.059 }, 00:08:45.059 { 00:08:45.059 "method": "bdev_wait_for_examine" 00:08:45.059 } 00:08:45.059 ] 00:08:45.059 } 00:08:45.059 ] 00:08:45.059 } 00:08:45.059 [2024-11-20 08:42:15.879656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.059 [2024-11-20 08:42:15.964257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.317 [2024-11-20 08:42:16.037435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.576  [2024-11-20T08:42:16.749Z] Copying: 65/65 [MB] (average 1203 MBps) 00:08:45.834 00:08:45.834 08:42:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:45.834 08:42:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:45.834 08:42:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:45.834 08:42:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:45.834 [2024-11-20 08:42:16.658928] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:45.834 [2024-11-20 08:42:16.659078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61198 ] 00:08:45.834 { 00:08:45.835 "subsystems": [ 00:08:45.835 { 00:08:45.835 "subsystem": "bdev", 00:08:45.835 "config": [ 00:08:45.835 { 00:08:45.835 "params": { 00:08:45.835 "trtype": "pcie", 00:08:45.835 "traddr": "0000:00:10.0", 00:08:45.835 "name": "Nvme0" 00:08:45.835 }, 00:08:45.835 "method": "bdev_nvme_attach_controller" 00:08:45.835 }, 00:08:45.835 { 00:08:45.835 "params": { 00:08:45.835 "trtype": "pcie", 00:08:45.835 "traddr": "0000:00:11.0", 00:08:45.835 "name": "Nvme1" 00:08:45.835 }, 00:08:45.835 "method": "bdev_nvme_attach_controller" 00:08:45.835 }, 00:08:45.835 { 00:08:45.835 "method": "bdev_wait_for_examine" 00:08:45.835 } 00:08:45.835 ] 00:08:45.835 } 00:08:45.835 ] 00:08:45.835 } 00:08:46.093 [2024-11-20 08:42:16.807160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.093 [2024-11-20 08:42:16.884961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.093 [2024-11-20 08:42:16.956712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.351  [2024-11-20T08:42:17.525Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:46.610 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:46.610 00:08:46.610 real 0m3.455s 00:08:46.610 user 0m2.537s 00:08:46.610 sys 0m1.103s 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:46.610 ************************************ 00:08:46.610 END TEST dd_offset_magic 00:08:46.610 ************************************ 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:46.610 08:42:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:46.610 [2024-11-20 08:42:17.498098] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:46.610 [2024-11-20 08:42:17.498758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61230 ] 00:08:46.610 { 00:08:46.610 "subsystems": [ 00:08:46.610 { 00:08:46.610 "subsystem": "bdev", 00:08:46.610 "config": [ 00:08:46.610 { 00:08:46.610 "params": { 00:08:46.610 "trtype": "pcie", 00:08:46.610 "traddr": "0000:00:10.0", 00:08:46.610 "name": "Nvme0" 00:08:46.610 }, 00:08:46.610 "method": "bdev_nvme_attach_controller" 00:08:46.610 }, 00:08:46.610 { 00:08:46.610 "params": { 00:08:46.610 "trtype": "pcie", 00:08:46.610 "traddr": "0000:00:11.0", 00:08:46.610 "name": "Nvme1" 00:08:46.610 }, 00:08:46.610 "method": "bdev_nvme_attach_controller" 00:08:46.610 }, 00:08:46.610 { 00:08:46.610 "method": "bdev_wait_for_examine" 00:08:46.610 } 00:08:46.610 ] 00:08:46.610 } 00:08:46.610 ] 00:08:46.610 } 00:08:46.869 [2024-11-20 08:42:17.642228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.869 [2024-11-20 08:42:17.720482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.127 [2024-11-20 08:42:17.792226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.127  [2024-11-20T08:42:18.301Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:47.386 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:47.386 08:42:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:47.386 { 00:08:47.386 "subsystems": [ 00:08:47.386 { 00:08:47.386 "subsystem": "bdev", 00:08:47.386 "config": [ 00:08:47.386 { 00:08:47.386 "params": { 00:08:47.386 "trtype": "pcie", 00:08:47.386 "traddr": "0000:00:10.0", 00:08:47.386 "name": "Nvme0" 00:08:47.386 }, 00:08:47.386 "method": "bdev_nvme_attach_controller" 00:08:47.386 }, 00:08:47.386 { 00:08:47.386 "params": { 00:08:47.386 "trtype": "pcie", 00:08:47.386 "traddr": "0000:00:11.0", 00:08:47.386 "name": "Nvme1" 00:08:47.386 }, 00:08:47.386 "method": "bdev_nvme_attach_controller" 00:08:47.386 }, 00:08:47.386 { 00:08:47.386 "method": "bdev_wait_for_examine" 00:08:47.386 } 00:08:47.386 ] 00:08:47.386 } 00:08:47.386 ] 00:08:47.386 } 00:08:47.646 [2024-11-20 08:42:18.302339] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:47.646 [2024-11-20 08:42:18.302517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:08:47.646 [2024-11-20 08:42:18.454586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.646 [2024-11-20 08:42:18.532512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.905 [2024-11-20 08:42:18.604190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.905  [2024-11-20T08:42:19.078Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:48.163 00:08:48.423 08:42:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:48.423 00:08:48.423 real 0m8.009s 00:08:48.423 user 0m5.854s 00:08:48.423 sys 0m3.756s 00:08:48.423 08:42:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.423 ************************************ 00:08:48.423 END TEST spdk_dd_bdev_to_bdev 00:08:48.423 ************************************ 00:08:48.423 08:42:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:48.423 08:42:19 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:48.423 08:42:19 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:48.423 08:42:19 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.423 08:42:19 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.423 08:42:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:48.423 ************************************ 00:08:48.423 START TEST spdk_dd_uring 00:08:48.423 ************************************ 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:48.423 * Looking for test storage... 
00:08:48.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.423 --rc genhtml_branch_coverage=1 00:08:48.423 --rc genhtml_function_coverage=1 00:08:48.423 --rc genhtml_legend=1 00:08:48.423 --rc geninfo_all_blocks=1 00:08:48.423 --rc geninfo_unexecuted_blocks=1 00:08:48.423 00:08:48.423 ' 00:08:48.423 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.424 --rc genhtml_branch_coverage=1 00:08:48.424 --rc genhtml_function_coverage=1 00:08:48.424 --rc genhtml_legend=1 00:08:48.424 --rc geninfo_all_blocks=1 00:08:48.424 --rc geninfo_unexecuted_blocks=1 00:08:48.424 00:08:48.424 ' 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.424 --rc genhtml_branch_coverage=1 00:08:48.424 --rc genhtml_function_coverage=1 00:08:48.424 --rc genhtml_legend=1 00:08:48.424 --rc geninfo_all_blocks=1 00:08:48.424 --rc geninfo_unexecuted_blocks=1 00:08:48.424 00:08:48.424 ' 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.424 --rc genhtml_branch_coverage=1 00:08:48.424 --rc genhtml_function_coverage=1 00:08:48.424 --rc genhtml_legend=1 00:08:48.424 --rc geninfo_all_blocks=1 00:08:48.424 --rc geninfo_unexecuted_blocks=1 00:08:48.424 00:08:48.424 ' 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.424 08:42:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:48.683 ************************************ 00:08:48.683 START TEST dd_uring_copy 00:08:48.683 ************************************ 00:08:48.683 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:48.684 
08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=mnd8h7b63v6m5td2qp5mv4e20k8olemloh7adbp1u7onqr4btxk0lzjbpygwffc54h5ayt7z76ue5oi8czxdrvzt6s0z3v2wne4e66ubi2ljmczlfwrbo5rdnk6028960kq680237tasb6owdfpe28guka0rofm4yhh6d99ab1z3woszs4zh9h6eoku4joiha2dytfofme67np6mwnl6jjb0myv1sddew2f067bdvgkek6p1fpq69siah283g3vbnj8mbq1m16dqm2thbmvivip3ru3dk0m6c8xkyk7ag7q6s30pnoy74hdxffyaondvzskypy3xty58p8fmpsol96g9ysziztgv553tqyb7kmth1c3442khwdg2yw2smzrdwq14c9168gho6ebyc676mowmair3jul8dmwu9t75hygvsdponvr10vbjs2agg3skmqov463b2wrbsxccrmviyvknnavfio5pwnv35tisu74krc166pqtvtff9s61wk8d06c1gex9dca7hhoj7za0dsbcoxwuozryneunt99nlo22u1z2tqr3eojd6ozli2vbl5ab3pmpy55tv1opdslaz5371lba95l9vacyo9j6pf50243i8jin26l26su69g75148dy89roopouz0wlc44gfncikq4wb5out91f372oskur4ef0iu0i3uetjkgbxru5uct5aov10d5m7oj5fgce8qwndwj37cf9x5k5vktgo8frvedmzl4wche7tbzsjehdfaqgmpva04nbu05mw40qav22nxaia3isvazz46qfo6p2bep6brirq26grk2r2h4p6wu3c03jj7zqxjrmrykesek8k3jfi7tt80yzp6ae32zq2namuw8lz5znqg41cacqh9loiq7f6ymo20edijixpbai615buar09b39xj1i05t6fmi0n8u9pt324t0o9nqy5cf80e1q5mr99ky9o9grz51rr8emvdwmgg5w1706o8h95121qe99zhw0g0ydbcb 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
mnd8h7b63v6m5td2qp5mv4e20k8olemloh7adbp1u7onqr4btxk0lzjbpygwffc54h5ayt7z76ue5oi8czxdrvzt6s0z3v2wne4e66ubi2ljmczlfwrbo5rdnk6028960kq680237tasb6owdfpe28guka0rofm4yhh6d99ab1z3woszs4zh9h6eoku4joiha2dytfofme67np6mwnl6jjb0myv1sddew2f067bdvgkek6p1fpq69siah283g3vbnj8mbq1m16dqm2thbmvivip3ru3dk0m6c8xkyk7ag7q6s30pnoy74hdxffyaondvzskypy3xty58p8fmpsol96g9ysziztgv553tqyb7kmth1c3442khwdg2yw2smzrdwq14c9168gho6ebyc676mowmair3jul8dmwu9t75hygvsdponvr10vbjs2agg3skmqov463b2wrbsxccrmviyvknnavfio5pwnv35tisu74krc166pqtvtff9s61wk8d06c1gex9dca7hhoj7za0dsbcoxwuozryneunt99nlo22u1z2tqr3eojd6ozli2vbl5ab3pmpy55tv1opdslaz5371lba95l9vacyo9j6pf50243i8jin26l26su69g75148dy89roopouz0wlc44gfncikq4wb5out91f372oskur4ef0iu0i3uetjkgbxru5uct5aov10d5m7oj5fgce8qwndwj37cf9x5k5vktgo8frvedmzl4wche7tbzsjehdfaqgmpva04nbu05mw40qav22nxaia3isvazz46qfo6p2bep6brirq26grk2r2h4p6wu3c03jj7zqxjrmrykesek8k3jfi7tt80yzp6ae32zq2namuw8lz5znqg41cacqh9loiq7f6ymo20edijixpbai615buar09b39xj1i05t6fmi0n8u9pt324t0o9nqy5cf80e1q5mr99ky9o9grz51rr8emvdwmgg5w1706o8h95121qe99zhw0g0ydbcb 00:08:48.684 08:42:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:48.684 [2024-11-20 08:42:19.438582] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:48.684 [2024-11-20 08:42:19.438710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61329 ] 00:08:48.684 [2024-11-20 08:42:19.591542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.943 [2024-11-20 08:42:19.659393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.943 [2024-11-20 08:42:19.716005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.509  [2024-11-20T08:42:20.991Z] Copying: 511/511 [MB] (average 1319 MBps) 00:08:50.076 00:08:50.076 08:42:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:50.076 08:42:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:50.076 08:42:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:50.076 08:42:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:50.076 { 00:08:50.076 "subsystems": [ 00:08:50.076 { 00:08:50.076 "subsystem": "bdev", 00:08:50.076 "config": [ 00:08:50.076 { 00:08:50.076 "params": { 00:08:50.076 "block_size": 512, 00:08:50.076 "num_blocks": 1048576, 00:08:50.076 "name": "malloc0" 00:08:50.076 }, 00:08:50.076 "method": "bdev_malloc_create" 00:08:50.076 }, 00:08:50.076 { 00:08:50.076 "params": { 00:08:50.076 "filename": "/dev/zram1", 00:08:50.076 "name": "uring0" 00:08:50.076 }, 00:08:50.076 "method": "bdev_uring_create" 00:08:50.076 }, 00:08:50.076 { 00:08:50.076 "method": "bdev_wait_for_examine" 00:08:50.076 } 00:08:50.076 ] 00:08:50.076 } 00:08:50.076 ] 00:08:50.076 } 00:08:50.076 [2024-11-20 08:42:20.763881] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:50.076 [2024-11-20 08:42:20.764464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61349 ] 00:08:50.076 [2024-11-20 08:42:20.915367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.439 [2024-11-20 08:42:21.003190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.439 [2024-11-20 08:42:21.083289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.811  [2024-11-20T08:42:23.662Z] Copying: 225/512 [MB] (225 MBps) [2024-11-20T08:42:23.662Z] Copying: 452/512 [MB] (226 MBps) [2024-11-20T08:42:24.229Z] Copying: 512/512 [MB] (average 226 MBps) 00:08:53.314 00:08:53.314 08:42:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:53.314 08:42:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:53.314 08:42:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:53.314 08:42:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:53.314 { 00:08:53.314 "subsystems": [ 00:08:53.314 { 00:08:53.314 "subsystem": "bdev", 00:08:53.314 "config": [ 00:08:53.314 { 00:08:53.314 "params": { 00:08:53.314 "block_size": 512, 00:08:53.314 "num_blocks": 1048576, 00:08:53.314 "name": "malloc0" 00:08:53.314 }, 00:08:53.314 "method": "bdev_malloc_create" 00:08:53.314 }, 00:08:53.314 { 00:08:53.314 "params": { 00:08:53.314 "filename": "/dev/zram1", 00:08:53.314 "name": "uring0" 00:08:53.314 }, 00:08:53.314 "method": "bdev_uring_create" 00:08:53.314 }, 00:08:53.314 { 00:08:53.314 "method": "bdev_wait_for_examine" 00:08:53.314 } 00:08:53.314 ] 00:08:53.314 } 00:08:53.314 ] 00:08:53.314 } 00:08:53.314 [2024-11-20 08:42:24.184061] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:53.314 [2024-11-20 08:42:24.184173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61401 ] 00:08:53.573 [2024-11-20 08:42:24.334208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.573 [2024-11-20 08:42:24.415897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.832 [2024-11-20 08:42:24.493727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.229  [2024-11-20T08:42:27.078Z] Copying: 154/512 [MB] (154 MBps) [2024-11-20T08:42:28.013Z] Copying: 309/512 [MB] (155 MBps) [2024-11-20T08:42:28.013Z] Copying: 478/512 [MB] (169 MBps) [2024-11-20T08:42:28.580Z] Copying: 512/512 [MB] (average 161 MBps) 00:08:57.665 00:08:57.665 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:57.665 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ mnd8h7b63v6m5td2qp5mv4e20k8olemloh7adbp1u7onqr4btxk0lzjbpygwffc54h5ayt7z76ue5oi8czxdrvzt6s0z3v2wne4e66ubi2ljmczlfwrbo5rdnk6028960kq680237tasb6owdfpe28guka0rofm4yhh6d99ab1z3woszs4zh9h6eoku4joiha2dytfofme67np6mwnl6jjb0myv1sddew2f067bdvgkek6p1fpq69siah283g3vbnj8mbq1m16dqm2thbmvivip3ru3dk0m6c8xkyk7ag7q6s30pnoy74hdxffyaondvzskypy3xty58p8fmpsol96g9ysziztgv553tqyb7kmth1c3442khwdg2yw2smzrdwq14c9168gho6ebyc676mowmair3jul8dmwu9t75hygvsdponvr10vbjs2agg3skmqov463b2wrbsxccrmviyvknnavfio5pwnv35tisu74krc166pqtvtff9s61wk8d06c1gex9dca7hhoj7za0dsbcoxwuozryneunt99nlo22u1z2tqr3eojd6ozli2vbl5ab3pmpy55tv1opdslaz5371lba95l9vacyo9j6pf50243i8jin26l26su69g75148dy89roopouz0wlc44gfncikq4wb5out91f372oskur4ef0iu0i3uetjkgbxru5uct5aov10d5m7oj5fgce8qwndwj37cf9x5k5vktgo8frvedmzl4wche7tbzsjehdfaqgmpva04nbu05mw40qav22nxaia3isvazz46qfo6p2bep6brirq26grk2r2h4p6wu3c03jj7zqxjrmrykesek8k3jfi7tt80yzp6ae32zq2namuw8lz5znqg41cacqh9loiq7f6ymo20edijixpbai615buar09b39xj1i05t6fmi0n8u9pt324t0o9nqy5cf80e1q5mr99ky9o9grz51rr8emvdwmgg5w1706o8h95121qe99zhw0g0ydbcb == 
\m\n\d\8\h\7\b\6\3\v\6\m\5\t\d\2\q\p\5\m\v\4\e\2\0\k\8\o\l\e\m\l\o\h\7\a\d\b\p\1\u\7\o\n\q\r\4\b\t\x\k\0\l\z\j\b\p\y\g\w\f\f\c\5\4\h\5\a\y\t\7\z\7\6\u\e\5\o\i\8\c\z\x\d\r\v\z\t\6\s\0\z\3\v\2\w\n\e\4\e\6\6\u\b\i\2\l\j\m\c\z\l\f\w\r\b\o\5\r\d\n\k\6\0\2\8\9\6\0\k\q\6\8\0\2\3\7\t\a\s\b\6\o\w\d\f\p\e\2\8\g\u\k\a\0\r\o\f\m\4\y\h\h\6\d\9\9\a\b\1\z\3\w\o\s\z\s\4\z\h\9\h\6\e\o\k\u\4\j\o\i\h\a\2\d\y\t\f\o\f\m\e\6\7\n\p\6\m\w\n\l\6\j\j\b\0\m\y\v\1\s\d\d\e\w\2\f\0\6\7\b\d\v\g\k\e\k\6\p\1\f\p\q\6\9\s\i\a\h\2\8\3\g\3\v\b\n\j\8\m\b\q\1\m\1\6\d\q\m\2\t\h\b\m\v\i\v\i\p\3\r\u\3\d\k\0\m\6\c\8\x\k\y\k\7\a\g\7\q\6\s\3\0\p\n\o\y\7\4\h\d\x\f\f\y\a\o\n\d\v\z\s\k\y\p\y\3\x\t\y\5\8\p\8\f\m\p\s\o\l\9\6\g\9\y\s\z\i\z\t\g\v\5\5\3\t\q\y\b\7\k\m\t\h\1\c\3\4\4\2\k\h\w\d\g\2\y\w\2\s\m\z\r\d\w\q\1\4\c\9\1\6\8\g\h\o\6\e\b\y\c\6\7\6\m\o\w\m\a\i\r\3\j\u\l\8\d\m\w\u\9\t\7\5\h\y\g\v\s\d\p\o\n\v\r\1\0\v\b\j\s\2\a\g\g\3\s\k\m\q\o\v\4\6\3\b\2\w\r\b\s\x\c\c\r\m\v\i\y\v\k\n\n\a\v\f\i\o\5\p\w\n\v\3\5\t\i\s\u\7\4\k\r\c\1\6\6\p\q\t\v\t\f\f\9\s\6\1\w\k\8\d\0\6\c\1\g\e\x\9\d\c\a\7\h\h\o\j\7\z\a\0\d\s\b\c\o\x\w\u\o\z\r\y\n\e\u\n\t\9\9\n\l\o\2\2\u\1\z\2\t\q\r\3\e\o\j\d\6\o\z\l\i\2\v\b\l\5\a\b\3\p\m\p\y\5\5\t\v\1\o\p\d\s\l\a\z\5\3\7\1\l\b\a\9\5\l\9\v\a\c\y\o\9\j\6\p\f\5\0\2\4\3\i\8\j\i\n\2\6\l\2\6\s\u\6\9\g\7\5\1\4\8\d\y\8\9\r\o\o\p\o\u\z\0\w\l\c\4\4\g\f\n\c\i\k\q\4\w\b\5\o\u\t\9\1\f\3\7\2\o\s\k\u\r\4\e\f\0\i\u\0\i\3\u\e\t\j\k\g\b\x\r\u\5\u\c\t\5\a\o\v\1\0\d\5\m\7\o\j\5\f\g\c\e\8\q\w\n\d\w\j\3\7\c\f\9\x\5\k\5\v\k\t\g\o\8\f\r\v\e\d\m\z\l\4\w\c\h\e\7\t\b\z\s\j\e\h\d\f\a\q\g\m\p\v\a\0\4\n\b\u\0\5\m\w\4\0\q\a\v\2\2\n\x\a\i\a\3\i\s\v\a\z\z\4\6\q\f\o\6\p\2\b\e\p\6\b\r\i\r\q\2\6\g\r\k\2\r\2\h\4\p\6\w\u\3\c\0\3\j\j\7\z\q\x\j\r\m\r\y\k\e\s\e\k\8\k\3\j\f\i\7\t\t\8\0\y\z\p\6\a\e\3\2\z\q\2\n\a\m\u\w\8\l\z\5\z\n\q\g\4\1\c\a\c\q\h\9\l\o\i\q\7\f\6\y\m\o\2\0\e\d\i\j\i\x\p\b\a\i\6\1\5\b\u\a\r\0\9\b\3\9\x\j\1\i\0\5\t\6\f\m\i\0\n\8\u\9\p\t\3\2\4\t\0\o\9\n\q\y\5\c\f\8\0\e\1\q\5\m\r\9\9\k\y\9\o\9\g\r\z\5\1\r\r\8\e\m\v\d\w\m\g\g\5\w\1\7\0\6\o\8\h\9\5\1\2\1\q\e\9\9\z\h\w\0\g\0\y\d\b\c\b ]] 00:08:57.665 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:57.665 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ mnd8h7b63v6m5td2qp5mv4e20k8olemloh7adbp1u7onqr4btxk0lzjbpygwffc54h5ayt7z76ue5oi8czxdrvzt6s0z3v2wne4e66ubi2ljmczlfwrbo5rdnk6028960kq680237tasb6owdfpe28guka0rofm4yhh6d99ab1z3woszs4zh9h6eoku4joiha2dytfofme67np6mwnl6jjb0myv1sddew2f067bdvgkek6p1fpq69siah283g3vbnj8mbq1m16dqm2thbmvivip3ru3dk0m6c8xkyk7ag7q6s30pnoy74hdxffyaondvzskypy3xty58p8fmpsol96g9ysziztgv553tqyb7kmth1c3442khwdg2yw2smzrdwq14c9168gho6ebyc676mowmair3jul8dmwu9t75hygvsdponvr10vbjs2agg3skmqov463b2wrbsxccrmviyvknnavfio5pwnv35tisu74krc166pqtvtff9s61wk8d06c1gex9dca7hhoj7za0dsbcoxwuozryneunt99nlo22u1z2tqr3eojd6ozli2vbl5ab3pmpy55tv1opdslaz5371lba95l9vacyo9j6pf50243i8jin26l26su69g75148dy89roopouz0wlc44gfncikq4wb5out91f372oskur4ef0iu0i3uetjkgbxru5uct5aov10d5m7oj5fgce8qwndwj37cf9x5k5vktgo8frvedmzl4wche7tbzsjehdfaqgmpva04nbu05mw40qav22nxaia3isvazz46qfo6p2bep6brirq26grk2r2h4p6wu3c03jj7zqxjrmrykesek8k3jfi7tt80yzp6ae32zq2namuw8lz5znqg41cacqh9loiq7f6ymo20edijixpbai615buar09b39xj1i05t6fmi0n8u9pt324t0o9nqy5cf80e1q5mr99ky9o9grz51rr8emvdwmgg5w1706o8h95121qe99zhw0g0ydbcb == 
\m\n\d\8\h\7\b\6\3\v\6\m\5\t\d\2\q\p\5\m\v\4\e\2\0\k\8\o\l\e\m\l\o\h\7\a\d\b\p\1\u\7\o\n\q\r\4\b\t\x\k\0\l\z\j\b\p\y\g\w\f\f\c\5\4\h\5\a\y\t\7\z\7\6\u\e\5\o\i\8\c\z\x\d\r\v\z\t\6\s\0\z\3\v\2\w\n\e\4\e\6\6\u\b\i\2\l\j\m\c\z\l\f\w\r\b\o\5\r\d\n\k\6\0\2\8\9\6\0\k\q\6\8\0\2\3\7\t\a\s\b\6\o\w\d\f\p\e\2\8\g\u\k\a\0\r\o\f\m\4\y\h\h\6\d\9\9\a\b\1\z\3\w\o\s\z\s\4\z\h\9\h\6\e\o\k\u\4\j\o\i\h\a\2\d\y\t\f\o\f\m\e\6\7\n\p\6\m\w\n\l\6\j\j\b\0\m\y\v\1\s\d\d\e\w\2\f\0\6\7\b\d\v\g\k\e\k\6\p\1\f\p\q\6\9\s\i\a\h\2\8\3\g\3\v\b\n\j\8\m\b\q\1\m\1\6\d\q\m\2\t\h\b\m\v\i\v\i\p\3\r\u\3\d\k\0\m\6\c\8\x\k\y\k\7\a\g\7\q\6\s\3\0\p\n\o\y\7\4\h\d\x\f\f\y\a\o\n\d\v\z\s\k\y\p\y\3\x\t\y\5\8\p\8\f\m\p\s\o\l\9\6\g\9\y\s\z\i\z\t\g\v\5\5\3\t\q\y\b\7\k\m\t\h\1\c\3\4\4\2\k\h\w\d\g\2\y\w\2\s\m\z\r\d\w\q\1\4\c\9\1\6\8\g\h\o\6\e\b\y\c\6\7\6\m\o\w\m\a\i\r\3\j\u\l\8\d\m\w\u\9\t\7\5\h\y\g\v\s\d\p\o\n\v\r\1\0\v\b\j\s\2\a\g\g\3\s\k\m\q\o\v\4\6\3\b\2\w\r\b\s\x\c\c\r\m\v\i\y\v\k\n\n\a\v\f\i\o\5\p\w\n\v\3\5\t\i\s\u\7\4\k\r\c\1\6\6\p\q\t\v\t\f\f\9\s\6\1\w\k\8\d\0\6\c\1\g\e\x\9\d\c\a\7\h\h\o\j\7\z\a\0\d\s\b\c\o\x\w\u\o\z\r\y\n\e\u\n\t\9\9\n\l\o\2\2\u\1\z\2\t\q\r\3\e\o\j\d\6\o\z\l\i\2\v\b\l\5\a\b\3\p\m\p\y\5\5\t\v\1\o\p\d\s\l\a\z\5\3\7\1\l\b\a\9\5\l\9\v\a\c\y\o\9\j\6\p\f\5\0\2\4\3\i\8\j\i\n\2\6\l\2\6\s\u\6\9\g\7\5\1\4\8\d\y\8\9\r\o\o\p\o\u\z\0\w\l\c\4\4\g\f\n\c\i\k\q\4\w\b\5\o\u\t\9\1\f\3\7\2\o\s\k\u\r\4\e\f\0\i\u\0\i\3\u\e\t\j\k\g\b\x\r\u\5\u\c\t\5\a\o\v\1\0\d\5\m\7\o\j\5\f\g\c\e\8\q\w\n\d\w\j\3\7\c\f\9\x\5\k\5\v\k\t\g\o\8\f\r\v\e\d\m\z\l\4\w\c\h\e\7\t\b\z\s\j\e\h\d\f\a\q\g\m\p\v\a\0\4\n\b\u\0\5\m\w\4\0\q\a\v\2\2\n\x\a\i\a\3\i\s\v\a\z\z\4\6\q\f\o\6\p\2\b\e\p\6\b\r\i\r\q\2\6\g\r\k\2\r\2\h\4\p\6\w\u\3\c\0\3\j\j\7\z\q\x\j\r\m\r\y\k\e\s\e\k\8\k\3\j\f\i\7\t\t\8\0\y\z\p\6\a\e\3\2\z\q\2\n\a\m\u\w\8\l\z\5\z\n\q\g\4\1\c\a\c\q\h\9\l\o\i\q\7\f\6\y\m\o\2\0\e\d\i\j\i\x\p\b\a\i\6\1\5\b\u\a\r\0\9\b\3\9\x\j\1\i\0\5\t\6\f\m\i\0\n\8\u\9\p\t\3\2\4\t\0\o\9\n\q\y\5\c\f\8\0\e\1\q\5\m\r\9\9\k\y\9\o\9\g\r\z\5\1\r\r\8\e\m\v\d\w\m\g\g\5\w\1\7\0\6\o\8\h\9\5\1\2\1\q\e\9\9\z\h\w\0\g\0\y\d\b\c\b ]] 00:08:57.666 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:58.233 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:58.233 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:58.233 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:58.233 08:42:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:58.233 [2024-11-20 08:42:28.989980] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:58.233 [2024-11-20 08:42:28.990104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61466 ] 00:08:58.233 { 00:08:58.233 "subsystems": [ 00:08:58.233 { 00:08:58.233 "subsystem": "bdev", 00:08:58.233 "config": [ 00:08:58.233 { 00:08:58.233 "params": { 00:08:58.233 "block_size": 512, 00:08:58.233 "num_blocks": 1048576, 00:08:58.233 "name": "malloc0" 00:08:58.233 }, 00:08:58.233 "method": "bdev_malloc_create" 00:08:58.233 }, 00:08:58.233 { 00:08:58.233 "params": { 00:08:58.233 "filename": "/dev/zram1", 00:08:58.233 "name": "uring0" 00:08:58.233 }, 00:08:58.233 "method": "bdev_uring_create" 00:08:58.233 }, 00:08:58.233 { 00:08:58.233 "method": "bdev_wait_for_examine" 00:08:58.233 } 00:08:58.233 ] 00:08:58.233 } 00:08:58.233 ] 00:08:58.233 } 00:08:58.233 [2024-11-20 08:42:29.137347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.491 [2024-11-20 08:42:29.221697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.491 [2024-11-20 08:42:29.298142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.869  [2024-11-20T08:42:31.719Z] Copying: 145/512 [MB] (145 MBps) [2024-11-20T08:42:32.656Z] Copying: 293/512 [MB] (147 MBps) [2024-11-20T08:42:33.225Z] Copying: 444/512 [MB] (150 MBps) [2024-11-20T08:42:33.794Z] Copying: 512/512 [MB] (average 148 MBps) 00:09:02.879 00:09:02.879 08:42:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:02.879 08:42:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:02.879 08:42:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:02.879 08:42:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:02.879 08:42:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:02.879 08:42:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:02.879 08:42:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:02.879 08:42:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:02.879 [2024-11-20 08:42:33.637243] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:02.879 [2024-11-20 08:42:33.637367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61554 ] 00:09:02.879 { 00:09:02.879 "subsystems": [ 00:09:02.879 { 00:09:02.879 "subsystem": "bdev", 00:09:02.879 "config": [ 00:09:02.879 { 00:09:02.879 "params": { 00:09:02.879 "block_size": 512, 00:09:02.879 "num_blocks": 1048576, 00:09:02.879 "name": "malloc0" 00:09:02.879 }, 00:09:02.879 "method": "bdev_malloc_create" 00:09:02.879 }, 00:09:02.879 { 00:09:02.879 "params": { 00:09:02.879 "filename": "/dev/zram1", 00:09:02.879 "name": "uring0" 00:09:02.879 }, 00:09:02.879 "method": "bdev_uring_create" 00:09:02.879 }, 00:09:02.879 { 00:09:02.879 "params": { 00:09:02.879 "name": "uring0" 00:09:02.879 }, 00:09:02.879 "method": "bdev_uring_delete" 00:09:02.879 }, 00:09:02.879 { 00:09:02.879 "method": "bdev_wait_for_examine" 00:09:02.879 } 00:09:02.879 ] 00:09:02.879 } 00:09:02.879 ] 00:09:02.879 } 00:09:02.879 [2024-11-20 08:42:33.788244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.138 [2024-11-20 08:42:33.866998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.138 [2024-11-20 08:42:33.938175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.403  [2024-11-20T08:42:34.886Z] Copying: 0/0 [B] (average 0 Bps) 00:09:03.971 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.971 08:42:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:03.971 08:42:34 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:03.971 { 00:09:03.971 "subsystems": [ 00:09:03.971 { 00:09:03.971 "subsystem": "bdev", 00:09:03.971 "config": [ 00:09:03.971 { 00:09:03.971 "params": { 00:09:03.971 "block_size": 512, 00:09:03.971 "num_blocks": 1048576, 00:09:03.971 "name": "malloc0" 00:09:03.971 }, 00:09:03.971 "method": "bdev_malloc_create" 00:09:03.971 }, 00:09:03.971 { 00:09:03.971 "params": { 00:09:03.971 "filename": "/dev/zram1", 00:09:03.971 "name": "uring0" 00:09:03.971 }, 00:09:03.971 "method": "bdev_uring_create" 00:09:03.971 }, 00:09:03.971 { 00:09:03.971 "params": { 00:09:03.971 "name": "uring0" 00:09:03.971 }, 00:09:03.971 "method": "bdev_uring_delete" 00:09:03.971 }, 00:09:03.971 { 00:09:03.971 "method": "bdev_wait_for_examine" 00:09:03.971 } 00:09:03.971 ] 00:09:03.971 } 00:09:03.971 ] 00:09:03.971 } 00:09:03.971 [2024-11-20 08:42:34.824876] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:03.971 [2024-11-20 08:42:34.824991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61583 ] 00:09:04.230 [2024-11-20 08:42:34.972905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.230 [2024-11-20 08:42:35.055900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.230 [2024-11-20 08:42:35.130563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.489 [2024-11-20 08:42:35.392872] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:04.489 [2024-11-20 08:42:35.392947] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:04.489 [2024-11-20 08:42:35.392960] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:04.489 [2024-11-20 08:42:35.392972] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:05.057 [2024-11-20 08:42:35.834669] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:05.057 08:42:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:05.624 00:09:05.624 real 0m16.941s 00:09:05.624 user 0m11.265s 00:09:05.624 sys 0m14.056s 00:09:05.624 08:42:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.624 08:42:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:05.624 ************************************ 00:09:05.624 END TEST dd_uring_copy 00:09:05.624 ************************************ 00:09:05.624 00:09:05.624 real 0m17.171s 00:09:05.624 user 0m11.396s 00:09:05.624 sys 0m14.164s 00:09:05.624 08:42:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.624 08:42:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:05.624 ************************************ 00:09:05.624 END TEST spdk_dd_uring 00:09:05.624 ************************************ 00:09:05.624 08:42:36 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:05.624 08:42:36 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.624 08:42:36 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.624 08:42:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:05.624 ************************************ 00:09:05.624 START TEST spdk_dd_sparse 00:09:05.624 ************************************ 00:09:05.624 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:05.624 * Looking for test storage... 00:09:05.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:05.625 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.625 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.625 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.884 --rc genhtml_branch_coverage=1 00:09:05.884 --rc genhtml_function_coverage=1 00:09:05.884 --rc genhtml_legend=1 00:09:05.884 --rc geninfo_all_blocks=1 00:09:05.884 --rc geninfo_unexecuted_blocks=1 00:09:05.884 00:09:05.884 ' 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.884 --rc genhtml_branch_coverage=1 00:09:05.884 --rc genhtml_function_coverage=1 00:09:05.884 --rc genhtml_legend=1 00:09:05.884 --rc geninfo_all_blocks=1 00:09:05.884 --rc geninfo_unexecuted_blocks=1 00:09:05.884 00:09:05.884 ' 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.884 --rc genhtml_branch_coverage=1 00:09:05.884 --rc genhtml_function_coverage=1 00:09:05.884 --rc genhtml_legend=1 00:09:05.884 --rc geninfo_all_blocks=1 00:09:05.884 --rc geninfo_unexecuted_blocks=1 00:09:05.884 00:09:05.884 ' 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.884 --rc genhtml_branch_coverage=1 00:09:05.884 --rc genhtml_function_coverage=1 00:09:05.884 --rc genhtml_legend=1 00:09:05.884 --rc geninfo_all_blocks=1 00:09:05.884 --rc geninfo_unexecuted_blocks=1 00:09:05.884 00:09:05.884 ' 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.884 08:42:36 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.884 08:42:36 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:05.885 1+0 records in 00:09:05.885 1+0 records out 00:09:05.885 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00617013 s, 680 MB/s 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:05.885 1+0 records in 00:09:05.885 1+0 records out 00:09:05.885 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00822005 s, 510 MB/s 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:05.885 1+0 records in 00:09:05.885 1+0 records out 00:09:05.885 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00463349 s, 905 MB/s 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:05.885 ************************************ 00:09:05.885 START TEST dd_sparse_file_to_file 00:09:05.885 ************************************ 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:05.885 08:42:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:05.885 { 00:09:05.885 "subsystems": [ 00:09:05.885 { 00:09:05.885 "subsystem": "bdev", 00:09:05.885 "config": [ 00:09:05.885 { 00:09:05.885 "params": { 00:09:05.885 "block_size": 4096, 00:09:05.885 "filename": "dd_sparse_aio_disk", 00:09:05.885 "name": "dd_aio" 00:09:05.885 }, 00:09:05.885 "method": "bdev_aio_create" 00:09:05.885 }, 00:09:05.885 { 00:09:05.885 "params": { 00:09:05.885 "lvs_name": "dd_lvstore", 00:09:05.885 "bdev_name": "dd_aio" 00:09:05.885 }, 00:09:05.885 "method": "bdev_lvol_create_lvstore" 00:09:05.885 }, 00:09:05.885 { 00:09:05.885 "method": "bdev_wait_for_examine" 00:09:05.885 } 00:09:05.885 ] 00:09:05.885 } 00:09:05.885 ] 00:09:05.885 } 00:09:05.885 [2024-11-20 08:42:36.689961] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
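[annotation] The prepare step and the file_zero1 -> file_zero2 leg exercised above can be reproduced standalone. The sketch below is a trimmed-down re-run under stated assumptions: $SPDK_DD stands in for build/bin/spdk_dd, dd_aio.json is a hypothetical file name, and its contents are copied from the gen_conf output shown in this log (the test itself pipes the same config through /dev/fd/62).

# Backing file for the AIO bdev (100 MiB) and a 36 MiB sparse source file with
# three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB.
truncate --size 104857600 dd_sparse_aio_disk
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

# Bdev config as emitted by gen_conf: an AIO bdev on the backing file plus an lvstore.
cat > dd_aio.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
          "method": "bdev_aio_create" },
        { "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" },
          "method": "bdev_lvol_create_lvstore" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# File-to-file copy with hole skipping; 12582912 (12 MiB) is the --bs used by the test.
"$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_aio.json

# Success criterion used below: same apparent size (%s) and same allocated blocks (%b),
# i.e. the holes survived the copy.
[ "$(stat --printf=%s file_zero1)" = "$(stat --printf=%s file_zero2)" ]
[ "$(stat --printf=%b file_zero1)" = "$(stat --printf=%b file_zero2)" ]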
00:09:05.885 [2024-11-20 08:42:36.690545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61684 ] 00:09:06.143 [2024-11-20 08:42:36.838435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.143 [2024-11-20 08:42:36.919106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.143 [2024-11-20 08:42:36.992179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.402  [2024-11-20T08:42:37.576Z] Copying: 12/36 [MB] (average 750 MBps) 00:09:06.661 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:06.661 00:09:06.661 real 0m0.792s 00:09:06.661 user 0m0.488s 00:09:06.661 sys 0m0.452s 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:06.661 ************************************ 00:09:06.661 END TEST dd_sparse_file_to_file 00:09:06.661 ************************************ 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:06.661 ************************************ 00:09:06.661 START TEST dd_sparse_file_to_bdev 00:09:06.661 ************************************ 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:06.661 08:42:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:06.661 [2024-11-20 08:42:37.543382] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:06.661 [2024-11-20 08:42:37.543528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61731 ] 00:09:06.661 { 00:09:06.661 "subsystems": [ 00:09:06.661 { 00:09:06.661 "subsystem": "bdev", 00:09:06.661 "config": [ 00:09:06.661 { 00:09:06.661 "params": { 00:09:06.661 "block_size": 4096, 00:09:06.661 "filename": "dd_sparse_aio_disk", 00:09:06.661 "name": "dd_aio" 00:09:06.661 }, 00:09:06.661 "method": "bdev_aio_create" 00:09:06.661 }, 00:09:06.661 { 00:09:06.661 "params": { 00:09:06.661 "lvs_name": "dd_lvstore", 00:09:06.661 "lvol_name": "dd_lvol", 00:09:06.661 "size_in_mib": 36, 00:09:06.661 "thin_provision": true 00:09:06.661 }, 00:09:06.661 "method": "bdev_lvol_create" 00:09:06.661 }, 00:09:06.661 { 00:09:06.661 "method": "bdev_wait_for_examine" 00:09:06.661 } 00:09:06.661 ] 00:09:06.661 } 00:09:06.661 ] 00:09:06.661 } 00:09:06.920 [2024-11-20 08:42:37.702437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.920 [2024-11-20 08:42:37.787069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.179 [2024-11-20 08:42:37.862445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.179  [2024-11-20T08:42:38.387Z] Copying: 12/36 [MB] (average 521 MBps) 00:09:07.472 00:09:07.472 00:09:07.472 real 0m0.800s 00:09:07.472 user 0m0.534s 00:09:07.472 sys 0m0.425s 00:09:07.472 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.472 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:07.472 ************************************ 00:09:07.473 END TEST dd_sparse_file_to_bdev 00:09:07.473 ************************************ 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:07.473 ************************************ 00:09:07.473 START TEST dd_sparse_bdev_to_file 00:09:07.473 ************************************ 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
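[annotation] The two bdev-backed legs logged around here follow the same shape. The sketch below reuses the $SPDK_DD and dd_aio.json placeholders from the note above; dd_lvol.json and dd_aio_only.json are likewise hypothetical names whose contents mirror the gen_conf blocks printed in this log.

# dd_lvol.json: the same AIO bdev entry plus the 36 MiB thin-provisioned lvol from the log:
#   { "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
#                 "size_in_mib": 36, "thin_provision": true },
#     "method": "bdev_lvol_create" }
# (the lvstore itself already exists on dd_sparse_aio_disk and is found by examine).

# file -> lvol: --ob names the output bdev as lvstore/lvol.
"$SPDK_DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json dd_lvol.json

# lvol -> file: here the config only needs bdev_aio_create + bdev_wait_for_examine
# (dd_aio_only.json), since the existing lvstore and lvol are discovered on the AIO bdev.
"$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json dd_aio_only.json

# Verification is the same stat %s / %b comparison as before: in this run both
# file_zero2 and file_zero3 report 37748736 apparent bytes and 24576 allocated blocks.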
00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:07.473 08:42:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:07.733 { 00:09:07.733 "subsystems": [ 00:09:07.733 { 00:09:07.733 "subsystem": "bdev", 00:09:07.733 "config": [ 00:09:07.733 { 00:09:07.733 "params": { 00:09:07.733 "block_size": 4096, 00:09:07.733 "filename": "dd_sparse_aio_disk", 00:09:07.733 "name": "dd_aio" 00:09:07.733 }, 00:09:07.733 "method": "bdev_aio_create" 00:09:07.733 }, 00:09:07.733 { 00:09:07.733 "method": "bdev_wait_for_examine" 00:09:07.733 } 00:09:07.733 ] 00:09:07.733 } 00:09:07.733 ] 00:09:07.733 } 00:09:07.733 [2024-11-20 08:42:38.394685] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:07.733 [2024-11-20 08:42:38.394839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61769 ] 00:09:07.733 [2024-11-20 08:42:38.541765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.733 [2024-11-20 08:42:38.620450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.005 [2024-11-20 08:42:38.691557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.005  [2024-11-20T08:42:39.178Z] Copying: 12/36 [MB] (average 1000 MBps) 00:09:08.263 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:08.263 00:09:08.263 real 0m0.757s 00:09:08.263 user 0m0.474s 
00:09:08.263 sys 0m0.431s 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:08.263 ************************************ 00:09:08.263 END TEST dd_sparse_bdev_to_file 00:09:08.263 ************************************ 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:08.263 00:09:08.263 real 0m2.789s 00:09:08.263 user 0m1.698s 00:09:08.263 sys 0m1.531s 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.263 08:42:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:08.263 ************************************ 00:09:08.263 END TEST spdk_dd_sparse 00:09:08.263 ************************************ 00:09:08.522 08:42:39 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:08.522 08:42:39 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.522 08:42:39 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.522 08:42:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:08.522 ************************************ 00:09:08.522 START TEST spdk_dd_negative 00:09:08.522 ************************************ 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:08.522 * Looking for test storage... 
00:09:08.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.522 --rc genhtml_branch_coverage=1 00:09:08.522 --rc genhtml_function_coverage=1 00:09:08.522 --rc genhtml_legend=1 00:09:08.522 --rc geninfo_all_blocks=1 00:09:08.522 --rc geninfo_unexecuted_blocks=1 00:09:08.522 00:09:08.522 ' 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.522 --rc genhtml_branch_coverage=1 00:09:08.522 --rc genhtml_function_coverage=1 00:09:08.522 --rc genhtml_legend=1 00:09:08.522 --rc geninfo_all_blocks=1 00:09:08.522 --rc geninfo_unexecuted_blocks=1 00:09:08.522 00:09:08.522 ' 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.522 --rc genhtml_branch_coverage=1 00:09:08.522 --rc genhtml_function_coverage=1 00:09:08.522 --rc genhtml_legend=1 00:09:08.522 --rc geninfo_all_blocks=1 00:09:08.522 --rc geninfo_unexecuted_blocks=1 00:09:08.522 00:09:08.522 ' 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.522 --rc genhtml_branch_coverage=1 00:09:08.522 --rc genhtml_function_coverage=1 00:09:08.522 --rc genhtml_legend=1 00:09:08.522 --rc geninfo_all_blocks=1 00:09:08.522 --rc geninfo_unexecuted_blocks=1 00:09:08.522 00:09:08.522 ' 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.522 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:08.781 ************************************ 00:09:08.781 START TEST 
dd_invalid_arguments 00:09:08.781 ************************************ 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:08.781 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:08.781 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:08.781 00:09:08.781 CPU options: 00:09:08.781 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:08.781 (like [0,1,10]) 00:09:08.781 --lcores lcore to CPU mapping list. The list is in the format: 00:09:08.781 [<,lcores[@CPUs]>...] 00:09:08.781 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:08.781 Within the group, '-' is used for range separator, 00:09:08.781 ',' is used for single number separator. 00:09:08.781 '( )' can be omitted for single element group, 00:09:08.781 '@' can be omitted if cpus and lcores have the same value 00:09:08.781 --disable-cpumask-locks Disable CPU core lock files. 00:09:08.781 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:08.781 pollers in the app support interrupt mode) 00:09:08.781 -p, --main-core main (primary) core for DPDK 00:09:08.781 00:09:08.781 Configuration options: 00:09:08.781 -c, --config, --json JSON config file 00:09:08.781 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:08.781 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:08.781 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:08.781 --rpcs-allowed comma-separated list of permitted RPCS 00:09:08.781 --json-ignore-init-errors don't exit on invalid config entry 00:09:08.781 00:09:08.781 Memory options: 00:09:08.781 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:08.781 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:08.781 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:08.781 -R, --huge-unlink unlink huge files after initialization 00:09:08.781 -n, --mem-channels number of memory channels used for DPDK 00:09:08.781 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:08.781 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:08.781 --no-huge run without using hugepages 00:09:08.781 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:08.781 -i, --shm-id shared memory ID (optional) 00:09:08.781 -g, --single-file-segments force creating just one hugetlbfs file 00:09:08.781 00:09:08.781 PCI options: 00:09:08.781 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:08.781 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:08.781 -u, --no-pci disable PCI access 00:09:08.781 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:08.781 00:09:08.781 Log options: 00:09:08.781 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:08.781 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:08.781 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:08.781 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:08.781 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:09:08.781 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:09:08.781 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:09:08.781 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:09:08.782 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:09:08.782 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:09:08.782 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:08.782 --silence-noticelog disable notice level logging to stderr 00:09:08.782 00:09:08.782 Trace options: 00:09:08.782 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:08.782 setting 0 to disable trace (default 32768) 00:09:08.782 Tracepoints vary in size and can use more than one trace entry. 00:09:08.782 -e, --tpoint-group [:] 00:09:08.782 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:08.782 [2024-11-20 08:42:39.499682] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:08.782 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:08.782 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:09:08.782 bdev_raid, scheduler, all). 00:09:08.782 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:08.782 a tracepoint group. First tpoint inside a group can be enabled by 00:09:08.782 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:08.782 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:08.782 in /include/spdk_internal/trace_defs.h 00:09:08.782 00:09:08.782 Other options: 00:09:08.782 -h, --help show this usage 00:09:08.782 -v, --version print SPDK version 00:09:08.782 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:08.782 --env-context Opaque context for use of the env implementation 00:09:08.782 00:09:08.782 Application specific: 00:09:08.782 [--------- DD Options ---------] 00:09:08.782 --if Input file. Must specify either --if or --ib. 00:09:08.782 --ib Input bdev. Must specifier either --if or --ib 00:09:08.782 --of Output file. Must specify either --of or --ob. 00:09:08.782 --ob Output bdev. Must specify either --of or --ob. 00:09:08.782 --iflag Input file flags. 00:09:08.782 --oflag Output file flags. 00:09:08.782 --bs I/O unit size (default: 4096) 00:09:08.782 --qd Queue depth (default: 2) 00:09:08.782 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:08.782 --skip Skip this many I/O units at start of input. (default: 0) 00:09:08.782 --seek Skip this many I/O units at start of output. (default: 0) 00:09:08.782 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:08.782 --sparse Enable hole skipping in input target 00:09:08.782 Available iflag and oflag values: 00:09:08.782 append - append mode 00:09:08.782 direct - use direct I/O for data 00:09:08.782 directory - fail unless a directory 00:09:08.782 dsync - use synchronized I/O for data 00:09:08.782 noatime - do not update access time 00:09:08.782 noctty - do not assign controlling terminal from file 00:09:08.782 nofollow - do not follow symlinks 00:09:08.782 nonblock - use non-blocking I/O 00:09:08.782 sync - use synchronized I/O for data and metadata 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.782 00:09:08.782 real 0m0.081s 00:09:08.782 user 0m0.044s 00:09:08.782 sys 0m0.036s 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:08.782 ************************************ 00:09:08.782 END TEST dd_invalid_arguments 00:09:08.782 ************************************ 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:08.782 ************************************ 00:09:08.782 START TEST dd_double_input 00:09:08.782 ************************************ 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:08.782 [2024-11-20 08:42:39.630971] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
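[annotation] The negative tests in this group all follow one pattern: run spdk_dd with a bad argument combination and require a non-zero exit plus the matching error line. A minimal standalone equivalent of that pattern is sketched below; not_ok is a simplified stand-in for the repo's NOT helper, not its actual implementation, and $SPDK_DD is the same placeholder as above.

# Succeed only if the wrapped command fails.
not_ok() { if "$@"; then return 1; else return 0; fi; }

touch dd.dump0 dd.dump1

# Unknown option -> spdk_dd prints its usage plus "Invalid arguments" and exits non-zero.
not_ok "$SPDK_DD" --ii= --ob=

# Both a file and a bdev as input -> "You may specify either --if or --ib, but not both."
not_ok "$SPDK_DD" --if=dd.dump0 --ib= --ob=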
00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.782 00:09:08.782 real 0m0.081s 00:09:08.782 user 0m0.052s 00:09:08.782 sys 0m0.027s 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.782 ************************************ 00:09:08.782 END TEST dd_double_input 00:09:08.782 ************************************ 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.782 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:09.040 ************************************ 00:09:09.040 START TEST dd_double_output 00:09:09.040 ************************************ 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:09.040 [2024-11-20 08:42:39.763138] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.040 00:09:09.040 real 0m0.096s 00:09:09.040 user 0m0.055s 00:09:09.040 sys 0m0.039s 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:09.040 ************************************ 00:09:09.040 END TEST dd_double_output 00:09:09.040 ************************************ 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:09.040 ************************************ 00:09:09.040 START TEST dd_no_input 00:09:09.040 ************************************ 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:09.040 [2024-11-20 08:42:39.906876] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.040 00:09:09.040 real 0m0.083s 00:09:09.040 user 0m0.048s 00:09:09.040 sys 0m0.034s 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.040 08:42:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:09.040 ************************************ 00:09:09.040 END TEST dd_no_input 00:09:09.040 ************************************ 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:09.298 ************************************ 00:09:09.298 START TEST dd_no_output 00:09:09.298 ************************************ 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.298 08:42:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.298 [2024-11-20 08:42:40.045294] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:09.299 08:42:40 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.299 00:09:09.299 real 0m0.084s 00:09:09.299 user 0m0.055s 00:09:09.299 sys 0m0.029s 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:09.299 ************************************ 00:09:09.299 END TEST dd_no_output 00:09:09.299 ************************************ 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:09.299 ************************************ 00:09:09.299 START TEST dd_wrong_blocksize 00:09:09.299 ************************************ 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:09.299 [2024-11-20 08:42:40.186304] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.299 00:09:09.299 real 0m0.085s 00:09:09.299 user 0m0.052s 00:09:09.299 sys 0m0.031s 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.299 ************************************ 00:09:09.299 END TEST dd_wrong_blocksize 00:09:09.299 08:42:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:09.299 ************************************ 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:09.556 ************************************ 00:09:09.556 START TEST dd_smaller_blocksize 00:09:09.556 ************************************ 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:09.556 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.557 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.557 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.557 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.557 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.557 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.557 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:09.557 
08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:09.557 08:42:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:09.557 [2024-11-20 08:42:40.317222] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:09.557 [2024-11-20 08:42:40.317322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62001 ] 00:09:09.557 [2024-11-20 08:42:40.462829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.816 [2024-11-20 08:42:40.525940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.816 [2024-11-20 08:42:40.580671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.074 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:10.333 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:10.333 [2024-11-20 08:42:41.185482] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:10.333 [2024-11-20 08:42:41.185578] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.592 [2024-11-20 08:42:41.303180] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:10.592 00:09:10.592 real 0m1.106s 00:09:10.592 user 0m0.395s 00:09:10.592 sys 0m0.604s 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.592 ************************************ 00:09:10.592 END TEST dd_smaller_blocksize 00:09:10.592 ************************************ 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:10.592 ************************************ 00:09:10.592 START TEST dd_invalid_count 00:09:10.592 ************************************ 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.592 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:10.592 [2024-11-20 08:42:41.484503] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:10.852 00:09:10.852 real 0m0.086s 00:09:10.852 user 0m0.054s 00:09:10.852 sys 0m0.030s 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.852 ************************************ 00:09:10.852 END TEST dd_invalid_count 00:09:10.852 ************************************ 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:10.852 ************************************ 
00:09:10.852 START TEST dd_invalid_oflag 00:09:10.852 ************************************ 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:10.852 [2024-11-20 08:42:41.614991] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:10.852 00:09:10.852 real 0m0.082s 00:09:10.852 user 0m0.049s 00:09:10.852 sys 0m0.032s 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:10.852 ************************************ 00:09:10.852 END TEST dd_invalid_oflag 00:09:10.852 ************************************ 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:10.852 ************************************ 00:09:10.852 START TEST dd_invalid_iflag 00:09:10.852 
************************************ 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.852 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.853 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.853 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.853 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.853 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:10.853 [2024-11-20 08:42:41.746541] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:11.111 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:11.111 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.111 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.112 00:09:11.112 real 0m0.074s 00:09:11.112 user 0m0.042s 00:09:11.112 sys 0m0.032s 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:11.112 ************************************ 00:09:11.112 END TEST dd_invalid_iflag 00:09:11.112 ************************************ 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:11.112 ************************************ 00:09:11.112 START TEST dd_unknown_flag 00:09:11.112 ************************************ 00:09:11.112 
08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:11.112 08:42:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:11.112 [2024-11-20 08:42:41.871717] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:11.112 [2024-11-20 08:42:41.871830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62093 ] 00:09:11.112 [2024-11-20 08:42:42.018053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.370 [2024-11-20 08:42:42.083838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.370 [2024-11-20 08:42:42.139148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.370 [2024-11-20 08:42:42.177134] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:11.370 [2024-11-20 08:42:42.177201] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.370 [2024-11-20 08:42:42.177259] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:11.370 [2024-11-20 08:42:42.177273] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.370 [2024-11-20 08:42:42.177503] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:11.370 [2024-11-20 08:42:42.177520] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.370 [2024-11-20 08:42:42.177580] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:11.370 [2024-11-20 08:42:42.177592] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:11.629 [2024-11-20 08:42:42.298568] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.629 00:09:11.629 real 0m0.550s 00:09:11.629 user 0m0.308s 00:09:11.629 sys 0m0.150s 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:11.629 ************************************ 00:09:11.629 END TEST dd_unknown_flag 00:09:11.629 ************************************ 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:11.629 ************************************ 00:09:11.629 START TEST dd_invalid_json 00:09:11.629 ************************************ 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:11.629 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:11.629 [2024-11-20 08:42:42.481039] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:11.629 [2024-11-20 08:42:42.481186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62127 ] 00:09:11.888 [2024-11-20 08:42:42.630896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.888 [2024-11-20 08:42:42.694205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.888 [2024-11-20 08:42:42.694285] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:11.888 [2024-11-20 08:42:42.694303] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:11.888 [2024-11-20 08:42:42.694313] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.888 [2024-11-20 08:42:42.694352] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:11.888 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:11.888 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.888 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:11.889 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:11.889 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:11.889 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.889 00:09:11.889 real 0m0.339s 00:09:11.889 user 0m0.175s 00:09:11.889 sys 0m0.063s 00:09:11.889 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.889 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:11.889 ************************************ 00:09:11.889 END TEST dd_invalid_json 00:09:11.889 ************************************ 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:12.147 ************************************ 00:09:12.147 START TEST dd_invalid_seek 00:09:12.147 ************************************ 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:12.147 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:12.147 
08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:12.148 08:42:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:12.148 [2024-11-20 08:42:42.870731] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:12.148 [2024-11-20 08:42:42.871090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62151 ] 00:09:12.148 { 00:09:12.148 "subsystems": [ 00:09:12.148 { 00:09:12.148 "subsystem": "bdev", 00:09:12.148 "config": [ 00:09:12.148 { 00:09:12.148 "params": { 00:09:12.148 "block_size": 512, 00:09:12.148 "num_blocks": 512, 00:09:12.148 "name": "malloc0" 00:09:12.148 }, 00:09:12.148 "method": "bdev_malloc_create" 00:09:12.148 }, 00:09:12.148 { 00:09:12.148 "params": { 00:09:12.148 "block_size": 512, 00:09:12.148 "num_blocks": 512, 00:09:12.148 "name": "malloc1" 00:09:12.148 }, 00:09:12.148 "method": "bdev_malloc_create" 00:09:12.148 }, 00:09:12.148 { 00:09:12.148 "method": "bdev_wait_for_examine" 00:09:12.148 } 00:09:12.148 ] 00:09:12.148 } 00:09:12.148 ] 00:09:12.148 } 00:09:12.148 [2024-11-20 08:42:43.014103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.407 [2024-11-20 08:42:43.080545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.407 [2024-11-20 08:42:43.137415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.407 [2024-11-20 08:42:43.202718] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:12.407 [2024-11-20 08:42:43.202791] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:12.666 [2024-11-20 08:42:43.332438] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.666 00:09:12.666 real 0m0.589s 00:09:12.666 user 0m0.392s 00:09:12.666 sys 0m0.155s 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:12.666 ************************************ 00:09:12.666 END TEST dd_invalid_seek 00:09:12.666 ************************************ 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.666 08:42:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:12.666 ************************************ 00:09:12.666 START TEST dd_invalid_skip 00:09:12.666 ************************************ 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:12.667 08:42:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:12.667 { 00:09:12.667 "subsystems": [ 00:09:12.667 { 00:09:12.667 "subsystem": "bdev", 00:09:12.667 "config": [ 00:09:12.667 { 00:09:12.667 "params": { 00:09:12.667 "block_size": 512, 00:09:12.667 "num_blocks": 512, 00:09:12.667 "name": "malloc0" 00:09:12.667 }, 00:09:12.667 "method": "bdev_malloc_create" 00:09:12.667 }, 00:09:12.667 { 00:09:12.667 "params": { 00:09:12.667 "block_size": 512, 00:09:12.667 "num_blocks": 512, 00:09:12.667 "name": "malloc1" 
00:09:12.667 }, 00:09:12.667 "method": "bdev_malloc_create" 00:09:12.667 }, 00:09:12.667 { 00:09:12.667 "method": "bdev_wait_for_examine" 00:09:12.667 } 00:09:12.667 ] 00:09:12.667 } 00:09:12.667 ] 00:09:12.667 } 00:09:12.667 [2024-11-20 08:42:43.523979] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:12.667 [2024-11-20 08:42:43.524094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62190 ] 00:09:12.926 [2024-11-20 08:42:43.671653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.926 [2024-11-20 08:42:43.756087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.926 [2024-11-20 08:42:43.816451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.186 [2024-11-20 08:42:43.882133] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:13.186 [2024-11-20 08:42:43.882202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:13.186 [2024-11-20 08:42:44.006043] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:13.186 ************************************ 00:09:13.186 END TEST dd_invalid_skip 00:09:13.186 ************************************ 00:09:13.186 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:13.186 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:13.186 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:13.186 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:13.186 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:13.186 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:13.186 00:09:13.186 real 0m0.619s 00:09:13.186 user 0m0.398s 00:09:13.186 sys 0m0.170s 00:09:13.186 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.186 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:13.445 08:42:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:13.445 08:42:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.445 08:42:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.445 08:42:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:13.445 ************************************ 00:09:13.445 START TEST dd_invalid_input_count 00:09:13.445 ************************************ 00:09:13.445 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:13.445 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:13.446 08:42:44 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:13.446 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:13.446 [2024-11-20 08:42:44.190578] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:13.446 [2024-11-20 08:42:44.190753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62223 ] 00:09:13.446 { 00:09:13.446 "subsystems": [ 00:09:13.446 { 00:09:13.446 "subsystem": "bdev", 00:09:13.446 "config": [ 00:09:13.446 { 00:09:13.446 "params": { 00:09:13.446 "block_size": 512, 00:09:13.446 "num_blocks": 512, 00:09:13.446 "name": "malloc0" 00:09:13.446 }, 00:09:13.446 "method": "bdev_malloc_create" 00:09:13.446 }, 00:09:13.446 { 00:09:13.446 "params": { 00:09:13.446 "block_size": 512, 00:09:13.446 "num_blocks": 512, 00:09:13.446 "name": "malloc1" 00:09:13.446 }, 00:09:13.446 "method": "bdev_malloc_create" 00:09:13.446 }, 00:09:13.446 { 00:09:13.446 "method": "bdev_wait_for_examine" 00:09:13.446 } 00:09:13.446 ] 00:09:13.446 } 00:09:13.446 ] 00:09:13.446 } 00:09:13.446 [2024-11-20 08:42:44.334983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.705 [2024-11-20 08:42:44.400918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.705 [2024-11-20 08:42:44.458169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.705 [2024-11-20 08:42:44.524202] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:13.705 [2024-11-20 08:42:44.524285] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:13.964 [2024-11-20 08:42:44.684665] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:13.964 ************************************ 00:09:13.964 END TEST dd_invalid_input_count 00:09:13.964 ************************************ 00:09:13.964 00:09:13.964 real 0m0.649s 00:09:13.964 user 0m0.440s 00:09:13.964 sys 0m0.168s 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:13.964 ************************************ 00:09:13.964 START TEST dd_invalid_output_count 00:09:13.964 ************************************ 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:13.964 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:13.965 08:42:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:14.224 { 00:09:14.224 "subsystems": [ 00:09:14.224 { 00:09:14.224 "subsystem": "bdev", 00:09:14.224 "config": [ 00:09:14.224 { 00:09:14.224 "params": { 00:09:14.224 "block_size": 512, 00:09:14.224 "num_blocks": 512, 00:09:14.224 "name": "malloc0" 00:09:14.224 }, 00:09:14.224 "method": "bdev_malloc_create" 00:09:14.224 }, 00:09:14.224 { 00:09:14.224 "method": "bdev_wait_for_examine" 00:09:14.224 } 00:09:14.224 ] 00:09:14.224 } 00:09:14.224 ] 00:09:14.224 } 00:09:14.224 [2024-11-20 08:42:44.900711] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 
initialization... 00:09:14.224 [2024-11-20 08:42:44.900844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62257 ] 00:09:14.224 [2024-11-20 08:42:45.053389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.483 [2024-11-20 08:42:45.143094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.483 [2024-11-20 08:42:45.225606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.483 [2024-11-20 08:42:45.294098] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:14.483 [2024-11-20 08:42:45.294179] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.742 [2024-11-20 08:42:45.470414] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.742 00:09:14.742 real 0m0.721s 00:09:14.742 user 0m0.459s 00:09:14.742 sys 0m0.218s 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.742 ************************************ 00:09:14.742 END TEST dd_invalid_output_count 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:14.742 ************************************ 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:14.742 ************************************ 00:09:14.742 START TEST dd_bs_not_multiple 00:09:14.742 ************************************ 00:09:14.742 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:14.743 08:42:45 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:14.743 08:42:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:15.002 [2024-11-20 08:42:45.674562] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:15.002 [2024-11-20 08:42:45.674684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62294 ] 00:09:15.002 { 00:09:15.002 "subsystems": [ 00:09:15.002 { 00:09:15.002 "subsystem": "bdev", 00:09:15.002 "config": [ 00:09:15.002 { 00:09:15.002 "params": { 00:09:15.002 "block_size": 512, 00:09:15.002 "num_blocks": 512, 00:09:15.002 "name": "malloc0" 00:09:15.002 }, 00:09:15.002 "method": "bdev_malloc_create" 00:09:15.002 }, 00:09:15.002 { 00:09:15.002 "params": { 00:09:15.002 "block_size": 512, 00:09:15.002 "num_blocks": 512, 00:09:15.002 "name": "malloc1" 00:09:15.002 }, 00:09:15.002 "method": "bdev_malloc_create" 00:09:15.002 }, 00:09:15.002 { 00:09:15.002 "method": "bdev_wait_for_examine" 00:09:15.002 } 00:09:15.002 ] 00:09:15.002 } 00:09:15.002 ] 00:09:15.002 } 00:09:15.002 [2024-11-20 08:42:45.818311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.002 [2024-11-20 08:42:45.899770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.262 [2024-11-20 08:42:45.979203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.262 [2024-11-20 08:42:46.056666] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:15.262 [2024-11-20 08:42:46.056744] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.521 [2024-11-20 08:42:46.233354] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:15.521 08:42:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:15.521 08:42:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.522 08:42:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:15.522 08:42:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:15.522 08:42:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:15.522 08:42:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.522 00:09:15.522 real 0m0.706s 00:09:15.522 user 0m0.460s 00:09:15.522 sys 0m0.206s 00:09:15.522 08:42:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.522 08:42:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:15.522 ************************************ 00:09:15.522 END TEST dd_bs_not_multiple 00:09:15.522 ************************************ 00:09:15.522 ************************************ 00:09:15.522 00:09:15.522 real 0m7.158s 00:09:15.522 user 0m3.891s 00:09:15.522 sys 0m2.690s 00:09:15.522 08:42:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.522 08:42:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:15.522 END TEST spdk_dd_negative 00:09:15.522 ************************************ 00:09:15.522 00:09:15.522 real 1m29.891s 00:09:15.522 user 0m58.110s 00:09:15.522 sys 0m40.457s 00:09:15.522 08:42:46 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.522 08:42:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:15.522 
************************************ 00:09:15.522 END TEST spdk_dd 00:09:15.522 ************************************ 00:09:15.781 08:42:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:15.781 08:42:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:15.781 08:42:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:15.781 08:42:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.781 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:09:15.781 08:42:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:15.781 08:42:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:15.781 08:42:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:15.781 08:42:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:15.781 08:42:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:15.781 08:42:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:15.781 08:42:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:15.781 08:42:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.781 08:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.781 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:09:15.781 ************************************ 00:09:15.781 START TEST nvmf_tcp 00:09:15.781 ************************************ 00:09:15.781 08:42:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:15.781 * Looking for test storage... 00:09:15.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:15.781 08:42:46 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.781 08:42:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.781 08:42:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.781 08:42:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.781 08:42:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.040 08:42:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:16.040 08:42:46 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.040 08:42:46 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.040 --rc genhtml_branch_coverage=1 00:09:16.040 --rc genhtml_function_coverage=1 00:09:16.040 --rc genhtml_legend=1 00:09:16.040 --rc geninfo_all_blocks=1 00:09:16.040 --rc geninfo_unexecuted_blocks=1 00:09:16.040 00:09:16.040 ' 00:09:16.040 08:42:46 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.040 --rc genhtml_branch_coverage=1 00:09:16.040 --rc genhtml_function_coverage=1 00:09:16.040 --rc genhtml_legend=1 00:09:16.040 --rc geninfo_all_blocks=1 00:09:16.040 --rc geninfo_unexecuted_blocks=1 00:09:16.040 00:09:16.041 ' 00:09:16.041 08:42:46 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 08:42:46 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 08:42:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:16.041 08:42:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:16.041 08:42:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:16.041 08:42:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.041 08:42:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.041 08:42:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:16.041 ************************************ 00:09:16.041 START TEST nvmf_target_core 00:09:16.041 ************************************ 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:16.041 * Looking for test storage... 00:09:16.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.041 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.042 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.042 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.369 08:42:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.369 ************************************ 00:09:16.369 START TEST nvmf_host_management 00:09:16.369 ************************************ 00:09:16.369 08:42:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:16.369 * Looking for test storage... 
00:09:16.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:16.369 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.370 --rc genhtml_branch_coverage=1 00:09:16.370 --rc genhtml_function_coverage=1 00:09:16.370 --rc genhtml_legend=1 00:09:16.370 --rc geninfo_all_blocks=1 00:09:16.370 --rc geninfo_unexecuted_blocks=1 00:09:16.370 00:09:16.370 ' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.370 --rc genhtml_branch_coverage=1 00:09:16.370 --rc genhtml_function_coverage=1 00:09:16.370 --rc genhtml_legend=1 00:09:16.370 --rc geninfo_all_blocks=1 00:09:16.370 --rc geninfo_unexecuted_blocks=1 00:09:16.370 00:09:16.370 ' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.370 --rc genhtml_branch_coverage=1 00:09:16.370 --rc genhtml_function_coverage=1 00:09:16.370 --rc genhtml_legend=1 00:09:16.370 --rc geninfo_all_blocks=1 00:09:16.370 --rc geninfo_unexecuted_blocks=1 00:09:16.370 00:09:16.370 ' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.370 --rc genhtml_branch_coverage=1 00:09:16.370 --rc genhtml_function_coverage=1 00:09:16.370 --rc genhtml_legend=1 00:09:16.370 --rc geninfo_all_blocks=1 00:09:16.370 --rc geninfo_unexecuted_blocks=1 00:09:16.370 00:09:16.370 ' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
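The "lt 1.15 2" / "cmp_versions" calls traced above are a field-by-field numeric version compare used to gate the lcov coverage options. A condensed standalone sketch of that logic follows; it is my own simplification for illustration, not the exact scripts/common.sh implementation:

cmp_versions() {                                   # usage: cmp_versions 1.15 '<' 2
    local IFS='.-:' op=$2 i max a b
    local -a ver1 ver2
    read -ra ver1 <<< "$1"                         # split each version on . - :
    read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}            # missing fields compare as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }               # lt 1.15 2 succeeds (exit 0)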
00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.370 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.370 08:42:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:16.370 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:16.371 Cannot find device "nvmf_init_br" 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:16.371 Cannot find device "nvmf_init_br2" 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:16.371 Cannot find device "nvmf_tgt_br" 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.371 Cannot find device "nvmf_tgt_br2" 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:16.371 Cannot find device "nvmf_init_br" 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:16.371 Cannot find device "nvmf_init_br2" 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:16.371 Cannot find device "nvmf_tgt_br" 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:16.371 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:16.630 Cannot find device "nvmf_tgt_br2" 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:16.630 Cannot find device "nvmf_br" 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:16.630 Cannot find device "nvmf_init_if" 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:16.630 Cannot find device "nvmf_init_if2" 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.630 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:16.631 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:16.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:16.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:09:16.891 00:09:16.891 --- 10.0.0.3 ping statistics --- 00:09:16.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.891 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:16.891 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:16.891 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:16.891 00:09:16.891 --- 10.0.0.4 ping statistics --- 00:09:16.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.891 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:16.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:16.891 00:09:16.891 --- 10.0.0.1 ping statistics --- 00:09:16.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.891 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:16.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:16.891 00:09:16.891 --- 10.0.0.2 ping statistics --- 00:09:16.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.891 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62637 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62637 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62637 ']' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.891 08:42:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:16.891 [2024-11-20 08:42:47.795217] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:16.891 [2024-11-20 08:42:47.795351] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.150 [2024-11-20 08:42:47.950774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.150 [2024-11-20 08:42:48.050216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.150 [2024-11-20 08:42:48.050550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.150 [2024-11-20 08:42:48.050710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.150 [2024-11-20 08:42:48.050923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.150 [2024-11-20 08:42:48.051178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.150 [2024-11-20 08:42:48.052817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.150 [2024-11-20 08:42:48.053095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:17.150 [2024-11-20 08:42:48.053097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.151 [2024-11-20 08:42:48.052965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.409 [2024-11-20 08:42:48.131065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.346 [2024-11-20 08:42:48.956575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.346 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.347 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:18.347 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.347 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.347 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
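The RPC batch applied just below creates the Malloc0 bdev and brings up the 10.0.0.3:4420 TCP listener behind which the cnode0 subsystem used by bdevperf is reached. A hand-written equivalent of that bring-up, using only values visible in this log (Malloc0 of 64 MiB with 512-byte blocks, serial SPDKISFASTANDAWESOME, cnode0/host0), might look like the following scripts/rpc.py sequence; this is an illustrative sketch, not the literal RPC file generated by this run:

# transport was already created above via: rpc_cmd nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0  # expose the bdev as a namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0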
00:09:18.347 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:18.347 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:18.347 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.347 08:42:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.347 Malloc0 00:09:18.347 [2024-11-20 08:42:49.040603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62691 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62691 /var/tmp/bdevperf.sock 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62691 ']' 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.347 { 00:09:18.347 "params": { 00:09:18.347 "name": "Nvme$subsystem", 00:09:18.347 "trtype": "$TEST_TRANSPORT", 00:09:18.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.347 "adrfam": "ipv4", 00:09:18.347 "trsvcid": "$NVMF_PORT", 00:09:18.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.347 "hdgst": ${hdgst:-false}, 00:09:18.347 "ddgst": ${ddgst:-false} 00:09:18.347 }, 00:09:18.347 "method": "bdev_nvme_attach_controller" 00:09:18.347 } 00:09:18.347 EOF 00:09:18.347 )") 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:18.347 08:42:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.347 "params": { 00:09:18.347 "name": "Nvme0", 00:09:18.347 "trtype": "tcp", 00:09:18.347 "traddr": "10.0.0.3", 00:09:18.347 "adrfam": "ipv4", 00:09:18.347 "trsvcid": "4420", 00:09:18.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:18.347 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:18.347 "hdgst": false, 00:09:18.347 "ddgst": false 00:09:18.347 }, 00:09:18.347 "method": "bdev_nvme_attach_controller" 00:09:18.347 }' 00:09:18.347 [2024-11-20 08:42:49.151008] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:18.347 [2024-11-20 08:42:49.151125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62691 ] 00:09:18.605 [2024-11-20 08:42:49.306856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.606 [2024-11-20 08:42:49.394982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.606 [2024-11-20 08:42:49.479608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.863 Running I/O for 10 seconds... 
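The bdevperf run above receives its bdev configuration on /dev/fd/63, i.e. through bash process substitution of the JSON emitted by gen_nvmf_target_json (printed a few lines up). A minimal standalone sketch of that pattern is below; gen_bdev_json is my own stand-in name, and the JSON is trimmed to the attach-controller entry shown in this log:

gen_bdev_json() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# same flags as the traced invocation: queue depth 64, 64 KiB IOs, verify workload, 10 s
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdev_json) \
    -q 64 -o 65536 -w verify -t 10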
00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:19.431 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:19.692 [2024-11-20 
08:42:50.363232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:19.692 [2024-11-20 08:42:50.363296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.363313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:19.692 [2024-11-20 08:42:50.363323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.363333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:19.692 [2024-11-20 08:42:50.363342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.363353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:19.692 [2024-11-20 08:42:50.363362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.363372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d5ce0 is same with the state(6) to be set 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.692 08:42:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:19.692 [2024-11-20 08:42:50.385112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
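The waitforio helper traced at 08:42:50 polls the running bdevperf instance over its RPC socket until the Nvme0n1 bdev shows read activity (963 ops here), and only then does host_management.sh line 84 revoke the host's access, which is what turns the in-flight writes into the long run of ABORTED - SQ DELETION completions surrounding this point. A stand-alone sketch of that sequence follows; the rpc.py and socket paths are the ones in the log, the loop bounds are illustrative.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Wait until bdevperf has actually issued I/O against Nvme0n1.
for _ in {1..10}; do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [[ ${reads:-0} -ge 100 ]] && break
    sleep 0.25
done

# Revoke the host while I/O is in flight; queued commands complete as
# ABORTED - SQ DELETION, as in the notices around this point in the log.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-admit the host; bdevperf's reset path reconnects the controller, which
# the log later reports as "Resetting controller successful".
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1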
00:09:19.692 [2024-11-20 08:42:50.385242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 
08:42:50.385452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 
08:42:50.385677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.692 [2024-11-20 08:42:50.385722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.692 [2024-11-20 08:42:50.385734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 
08:42:50.385916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.385988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.385997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 
08:42:50.386115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 
08:42:50.386322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.693 [2024-11-20 08:42:50.386333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.693 [2024-11-20 08:42:50.386342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 08:42:50.386362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 08:42:50.386382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 08:42:50.386401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 08:42:50.386421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 08:42:50.386441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 08:42:50.386461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 08:42:50.386480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 08:42:50.386500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.694 [2024-11-20 
08:42:50.386525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.694 [2024-11-20 08:42:50.386536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d02d0 is same with the state(6) to be set 00:09:19.694 task offset: 8192 on job bdev=Nvme0n1 fails 00:09:19.694 00:09:19.694 Latency(us) 00:09:19.694 [2024-11-20T08:42:50.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.694 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:19.694 Job: Nvme0n1 ended in about 0.77 seconds with error 00:09:19.694 Verification LBA range: start 0x0 length 0x400 00:09:19.694 Nvme0n1 : 0.77 1410.85 88.18 82.99 0.00 41858.33 2055.45 40036.54 00:09:19.694 [2024-11-20T08:42:50.609Z] =================================================================================================================== 00:09:19.694 [2024-11-20T08:42:50.609Z] Total : 1410.85 88.18 82.99 0.00 41858.33 2055.45 40036.54 00:09:19.694 [2024-11-20 08:42:50.386717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d5ce0 (9): Bad file descriptor 00:09:19.694 [2024-11-20 08:42:50.387876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:19.694 [2024-11-20 08:42:50.390185] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:19.694 [2024-11-20 08:42:50.393309] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62691 00:09:20.655 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62691) - No such process 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:20.655 { 00:09:20.655 "params": { 00:09:20.655 "name": "Nvme$subsystem", 00:09:20.655 "trtype": "$TEST_TRANSPORT", 00:09:20.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.655 "adrfam": "ipv4", 00:09:20.655 "trsvcid": "$NVMF_PORT", 00:09:20.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.655 "hdgst": ${hdgst:-false}, 00:09:20.655 "ddgst": ${ddgst:-false} 00:09:20.655 }, 00:09:20.655 "method": "bdev_nvme_attach_controller" 00:09:20.655 } 
00:09:20.655 EOF 00:09:20.655 )") 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:20.655 08:42:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:20.655 "params": { 00:09:20.655 "name": "Nvme0", 00:09:20.655 "trtype": "tcp", 00:09:20.655 "traddr": "10.0.0.3", 00:09:20.655 "adrfam": "ipv4", 00:09:20.655 "trsvcid": "4420", 00:09:20.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:20.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:20.655 "hdgst": false, 00:09:20.655 "ddgst": false 00:09:20.655 }, 00:09:20.655 "method": "bdev_nvme_attach_controller" 00:09:20.655 }' 00:09:20.655 [2024-11-20 08:42:51.444627] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:20.656 [2024-11-20 08:42:51.444746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62729 ] 00:09:20.914 [2024-11-20 08:42:51.599256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.914 [2024-11-20 08:42:51.682335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.914 [2024-11-20 08:42:51.766809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.172 Running I/O for 1 seconds... 00:09:22.109 1408.00 IOPS, 88.00 MiB/s 00:09:22.109 Latency(us) 00:09:22.109 [2024-11-20T08:42:53.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.109 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:22.109 Verification LBA range: start 0x0 length 0x400 00:09:22.109 Nvme0n1 : 1.02 1446.85 90.43 0.00 0.00 43357.69 5123.72 40036.54 00:09:22.109 [2024-11-20T08:42:53.024Z] =================================================================================================================== 00:09:22.109 [2024-11-20T08:42:53.024Z] Total : 1446.85 90.43 0.00 0.00 43357.69 5123.72 40036.54 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:22.369 08:42:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.369 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.369 rmmod nvme_tcp 00:09:22.628 rmmod nvme_fabrics 00:09:22.628 rmmod nvme_keyring 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62637 ']' 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62637 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62637 ']' 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62637 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62637 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:22.628 killing process with pid 62637 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62637' 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62637 00:09:22.628 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62637 00:09:22.887 [2024-11-20 08:42:53.636022] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.887 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:22.888 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:22.888 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:22.888 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:22.888 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:23.146 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:23.146 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:23.146 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:23.146 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:23.146 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:23.146 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:23.147 00:09:23.147 real 0m6.977s 00:09:23.147 user 0m25.538s 00:09:23.147 sys 0m1.936s 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:23.147 ************************************ 00:09:23.147 END TEST nvmf_host_management 00:09:23.147 ************************************ 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.147 ************************************ 00:09:23.147 START TEST nvmf_lvol 00:09:23.147 ************************************ 00:09:23.147 08:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:23.406 * Looking for test 
storage... 00:09:23.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:23.406 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:23.406 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:23.406 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:23.406 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:23.406 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.406 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.406 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.406 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:23.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.407 --rc genhtml_branch_coverage=1 00:09:23.407 --rc genhtml_function_coverage=1 00:09:23.407 --rc genhtml_legend=1 00:09:23.407 --rc geninfo_all_blocks=1 00:09:23.407 --rc geninfo_unexecuted_blocks=1 00:09:23.407 00:09:23.407 ' 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:23.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.407 --rc genhtml_branch_coverage=1 00:09:23.407 --rc genhtml_function_coverage=1 00:09:23.407 --rc genhtml_legend=1 00:09:23.407 --rc geninfo_all_blocks=1 00:09:23.407 --rc geninfo_unexecuted_blocks=1 00:09:23.407 00:09:23.407 ' 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:23.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.407 --rc genhtml_branch_coverage=1 00:09:23.407 --rc genhtml_function_coverage=1 00:09:23.407 --rc genhtml_legend=1 00:09:23.407 --rc geninfo_all_blocks=1 00:09:23.407 --rc geninfo_unexecuted_blocks=1 00:09:23.407 00:09:23.407 ' 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:23.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.407 --rc genhtml_branch_coverage=1 00:09:23.407 --rc genhtml_function_coverage=1 00:09:23.407 --rc genhtml_legend=1 00:09:23.407 --rc geninfo_all_blocks=1 00:09:23.407 --rc geninfo_unexecuted_blocks=1 00:09:23.407 00:09:23.407 ' 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.407 08:42:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:23.407 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:23.408 
08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
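For this virtualized run (NET_TYPE=virt) nvmftestinit falls through to nvmf_veth_init, and the ip commands traced next first tear down any leftovers (the "Cannot find device" and "Cannot open network namespace" messages are expected) and then build the test network: two veth pairs for the initiator side, two for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and everything bridged into one /24. A condensed equivalent, using the names and addresses from the log with an abbreviated command set:

# Sketch of the topology nvmf_veth_init builds in the lines that follow.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# The target-side ends live in the namespace the SPDK target will run in.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so 10.0.0.1/2 can reach 10.0.0.3/4 inside the namespace.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

The ipts -I INPUT call the excerpt ends on is an insertion into the iptables INPUT chain for this test traffic; the rule itself is truncated here.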
00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:23.408 Cannot find device "nvmf_init_br" 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:23.408 Cannot find device "nvmf_init_br2" 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:23.408 Cannot find device "nvmf_tgt_br" 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:23.408 Cannot find device "nvmf_tgt_br2" 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:23.408 Cannot find device "nvmf_init_br" 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:23.408 Cannot find device "nvmf_init_br2" 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:23.408 Cannot find device "nvmf_tgt_br" 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:23.408 Cannot find device "nvmf_tgt_br2" 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:23.408 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:23.408 Cannot find device "nvmf_br" 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:23.668 Cannot find device "nvmf_init_if" 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:23.668 Cannot find device "nvmf_init_if2" 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:23.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:23.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:23.668 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:23.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:23.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:09:23.927 00:09:23.927 --- 10.0.0.3 ping statistics --- 00:09:23.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.927 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:23.927 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:23.927 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:09:23.927 00:09:23.927 --- 10.0.0.4 ping statistics --- 00:09:23.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.927 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:23.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:09:23.927 00:09:23.927 --- 10.0.0.1 ping statistics --- 00:09:23.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.927 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:23.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:23.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:09:23.927 00:09:23.927 --- 10.0.0.2 ping statistics --- 00:09:23.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.927 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.927 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63005 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63005 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 63005 ']' 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.928 08:42:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.928 [2024-11-20 08:42:54.707115] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:23.928 [2024-11-20 08:42:54.707754] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.186 [2024-11-20 08:42:54.860307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.186 [2024-11-20 08:42:54.950195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.186 [2024-11-20 08:42:54.950287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.186 [2024-11-20 08:42:54.950302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.186 [2024-11-20 08:42:54.950313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.186 [2024-11-20 08:42:54.950323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.186 [2024-11-20 08:42:54.951890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.186 [2024-11-20 08:42:54.952003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.186 [2024-11-20 08:42:54.952003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.186 [2024-11-20 08:42:55.031594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.445 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.445 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:24.445 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.445 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.445 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:24.445 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.445 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:24.715 [2024-11-20 08:42:55.453446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.715 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.974 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:24.974 08:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.540 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:25.540 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:25.798 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:26.056 08:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b2a67fa2-0971-4092-92a2-66facec50f6f 00:09:26.056 08:42:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b2a67fa2-0971-4092-92a2-66facec50f6f lvol 20 00:09:26.315 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=db5ab9c7-970b-4d12-a90a-5eab27ee08cb 00:09:26.315 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:26.573 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 db5ab9c7-970b-4d12-a90a-5eab27ee08cb 00:09:27.141 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:27.141 [2024-11-20 08:42:57.979120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:27.141 08:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:27.707 08:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63078 00:09:27.707 08:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:27.707 08:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:28.642 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot db5ab9c7-970b-4d12-a90a-5eab27ee08cb MY_SNAPSHOT 00:09:28.957 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d7aa7b23-f239-43df-b5ce-960b580f7d76 00:09:28.957 08:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize db5ab9c7-970b-4d12-a90a-5eab27ee08cb 30 00:09:29.229 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d7aa7b23-f239-43df-b5ce-960b580f7d76 MY_CLONE 00:09:29.488 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7ea28b09-f1bd-41e1-9b25-f11cd632c145 00:09:29.488 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7ea28b09-f1bd-41e1-9b25-f11cd632c145 00:09:30.057 08:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63078 00:09:38.178 Initializing NVMe Controllers 00:09:38.178 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:38.178 Controller IO queue size 128, less than required. 00:09:38.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:38.178 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:38.178 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:38.178 Initialization complete. Launching workers. 
00:09:38.178 ======================================================== 00:09:38.178 Latency(us) 00:09:38.178 Device Information : IOPS MiB/s Average min max 00:09:38.178 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10082.30 39.38 12694.59 2124.84 108353.08 00:09:38.178 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9953.60 38.88 12862.84 2925.40 55924.81 00:09:38.178 ======================================================== 00:09:38.178 Total : 20035.90 78.27 12778.17 2124.84 108353.08 00:09:38.178 00:09:38.178 08:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:38.178 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete db5ab9c7-970b-4d12-a90a-5eab27ee08cb 00:09:38.437 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2a67fa2-0971-4092-92a2-66facec50f6f 00:09:38.696 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:38.696 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:38.696 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:38.696 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.696 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.955 rmmod nvme_tcp 00:09:38.955 rmmod nvme_fabrics 00:09:38.955 rmmod nvme_keyring 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63005 ']' 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63005 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 63005 ']' 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 63005 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63005 00:09:38.955 killing process with pid 63005 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.955 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63005' 00:09:38.956 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 63005 00:09:38.956 08:43:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 63005 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:39.215 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:39.474 00:09:39.474 real 0m16.333s 00:09:39.474 user 1m6.550s 00:09:39.474 sys 0m4.602s 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:39.474 ************************************ 00:09:39.474 END TEST nvmf_lvol 00:09:39.474 ************************************ 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.474 ************************************ 00:09:39.474 START TEST nvmf_lvs_grow 00:09:39.474 ************************************ 00:09:39.474 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:39.735 * Looking for test storage... 00:09:39.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.735 --rc genhtml_branch_coverage=1 00:09:39.735 --rc genhtml_function_coverage=1 00:09:39.735 --rc genhtml_legend=1 00:09:39.735 --rc geninfo_all_blocks=1 00:09:39.735 --rc geninfo_unexecuted_blocks=1 00:09:39.735 00:09:39.735 ' 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.735 --rc genhtml_branch_coverage=1 00:09:39.735 --rc genhtml_function_coverage=1 00:09:39.735 --rc genhtml_legend=1 00:09:39.735 --rc geninfo_all_blocks=1 00:09:39.735 --rc geninfo_unexecuted_blocks=1 00:09:39.735 00:09:39.735 ' 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.735 --rc genhtml_branch_coverage=1 00:09:39.735 --rc genhtml_function_coverage=1 00:09:39.735 --rc genhtml_legend=1 00:09:39.735 --rc geninfo_all_blocks=1 00:09:39.735 --rc geninfo_unexecuted_blocks=1 00:09:39.735 00:09:39.735 ' 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.735 --rc genhtml_branch_coverage=1 00:09:39.735 --rc genhtml_function_coverage=1 00:09:39.735 --rc genhtml_legend=1 00:09:39.735 --rc geninfo_all_blocks=1 00:09:39.735 --rc geninfo_unexecuted_blocks=1 00:09:39.735 00:09:39.735 ' 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:39.735 08:43:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:09:39.735 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
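The two variables defined above mean the lvs_grow tests talk to two separate JSON-RPC servers: the nvmf target started later by nvmfappstart, reached through rpc.py's default socket, and a bdevperf process listening on /var/tmp/bdevperf.sock. Purely as an illustration of how one client addresses both endpoints (bdev_get_bdevs is just a convenient read-only call here, not necessarily one this test issues at this point):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs                                   # default socket: the nvmf_tgt instance
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs         # -s points the same client at bdevperf's socket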
00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
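The nvmftestinit call above repeats, for the lvs_grow suite, the same network bring-up already seen in the nvmf_lvol run. The "Cannot find device" and "Cannot open network namespace" messages that follow are expected: nvmf_veth_init first tries to tear down anything a previous run may have left behind, and the trace shows each failing command immediately followed by true, so the failure is tolerated. The pattern is roughly as below, where the explicit || true stands in for however common.sh swallows the error:

    # teardown-before-setup: these probes fail harmlessly on a clean host
    ip link set nvmf_init_br nomaster             || true   # "Cannot find device nvmf_init_br"
    ip link delete nvmf_br type bridge            || true
    ip link delete nvmf_init_if                   || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true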
00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:39.736 Cannot find device "nvmf_init_br" 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:39.736 Cannot find device "nvmf_init_br2" 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:39.736 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:39.996 Cannot find device "nvmf_tgt_br" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:39.996 Cannot find device "nvmf_tgt_br2" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:39.996 Cannot find device "nvmf_init_br" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:39.996 Cannot find device "nvmf_init_br2" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:39.996 Cannot find device "nvmf_tgt_br" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:39.996 Cannot find device "nvmf_tgt_br2" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:39.996 Cannot find device "nvmf_br" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:39.996 Cannot find device "nvmf_init_if" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:39.996 Cannot find device "nvmf_init_if2" 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:39.996 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
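As in the nvmf_lvol run, the firewall openings added next go through the ipts helper, which appends an SPDK_NVMF: comment to every rule so that the iptr cleanup step (visible near the end of the lvol test above as iptables-save | grep -v SPDK_NVMF | iptables-restore) can drop exactly the rules the test created and nothing else. The tag-and-sweep pattern, condensed from the traced commands:

    # open NVMe/TCP port 4420 on both initiator interfaces and allow bridge-local forwarding,
    # tagging each rule so it can be identified later
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # teardown: restore the ruleset minus everything carrying the SPDK_NVMF tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore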
00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:40.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:40.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:09:40.256 00:09:40.256 --- 10.0.0.3 ping statistics --- 00:09:40.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.256 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:40.256 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:40.256 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:40.256 00:09:40.256 --- 10.0.0.4 ping statistics --- 00:09:40.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.256 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:40.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:40.256 00:09:40.256 --- 10.0.0.1 ping statistics --- 00:09:40.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.256 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:40.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:09:40.256 00:09:40.256 --- 10.0.0.2 ping statistics --- 00:09:40.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.256 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:40.256 08:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63472 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63472 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63472 ']' 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.256 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:40.256 [2024-11-20 08:43:11.074890] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:40.256 [2024-11-20 08:43:11.075008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.515 [2024-11-20 08:43:11.220458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.515 [2024-11-20 08:43:11.301027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.515 [2024-11-20 08:43:11.301112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.515 [2024-11-20 08:43:11.301126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.515 [2024-11-20 08:43:11.301136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.515 [2024-11-20 08:43:11.301145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.515 [2024-11-20 08:43:11.301627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.515 [2024-11-20 08:43:11.378570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.773 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.773 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:40.773 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.773 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.773 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:40.773 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.773 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:41.032 [2024-11-20 08:43:11.804178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.032 ************************************ 00:09:41.032 START TEST lvs_grow_clean 00:09:41.032 ************************************ 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:41.032 08:43:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:41.032 08:43:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.291 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:41.291 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:41.859 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:41.859 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:41.859 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:41.859 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:41.859 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:42.117 08:43:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0c605a23-c031-4126-ab4c-70e3161d33f8 lvol 150 00:09:42.375 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f66f9b49-6c0a-4d8d-88c9-d615de76fb8c 00:09:42.375 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:42.375 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:42.633 [2024-11-20 08:43:13.336757] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:42.633 [2024-11-20 08:43:13.336933] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:42.633 true 00:09:42.633 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:42.633 08:43:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:42.893 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:42.893 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:43.152 08:43:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f66f9b49-6c0a-4d8d-88c9-d615de76fb8c 00:09:43.412 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:43.671 [2024-11-20 08:43:14.529538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:43.671 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63547 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63547 /var/tmp/bdevperf.sock 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63547 ']' 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.930 08:43:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 [2024-11-20 08:43:14.881891] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:44.189 [2024-11-20 08:43:14.882003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63547 ] 00:09:44.189 [2024-11-20 08:43:15.031957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.478 [2024-11-20 08:43:15.124503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.478 [2024-11-20 08:43:15.200159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.478 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.478 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:44.478 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:44.737 Nvme0n1 00:09:44.996 08:43:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:45.256 [ 00:09:45.256 { 00:09:45.256 "name": "Nvme0n1", 00:09:45.256 "aliases": [ 00:09:45.256 "f66f9b49-6c0a-4d8d-88c9-d615de76fb8c" 00:09:45.256 ], 00:09:45.256 "product_name": "NVMe disk", 00:09:45.256 "block_size": 4096, 00:09:45.256 "num_blocks": 38912, 00:09:45.256 "uuid": "f66f9b49-6c0a-4d8d-88c9-d615de76fb8c", 00:09:45.256 "numa_id": -1, 00:09:45.256 "assigned_rate_limits": { 00:09:45.256 "rw_ios_per_sec": 0, 00:09:45.256 "rw_mbytes_per_sec": 0, 00:09:45.256 "r_mbytes_per_sec": 0, 00:09:45.256 "w_mbytes_per_sec": 0 00:09:45.256 }, 00:09:45.256 "claimed": false, 00:09:45.256 "zoned": false, 00:09:45.256 "supported_io_types": { 00:09:45.256 "read": true, 00:09:45.256 "write": true, 00:09:45.256 "unmap": true, 00:09:45.256 "flush": true, 00:09:45.256 "reset": true, 00:09:45.256 "nvme_admin": true, 00:09:45.256 "nvme_io": true, 00:09:45.256 "nvme_io_md": false, 00:09:45.256 "write_zeroes": true, 00:09:45.256 "zcopy": false, 00:09:45.256 "get_zone_info": false, 00:09:45.256 "zone_management": false, 00:09:45.256 "zone_append": false, 00:09:45.256 "compare": true, 00:09:45.256 "compare_and_write": true, 00:09:45.256 "abort": true, 00:09:45.256 "seek_hole": false, 00:09:45.256 "seek_data": false, 00:09:45.256 "copy": true, 00:09:45.256 "nvme_iov_md": false 00:09:45.256 }, 00:09:45.256 "memory_domains": [ 00:09:45.256 { 00:09:45.256 "dma_device_id": "system", 00:09:45.256 "dma_device_type": 1 00:09:45.256 } 00:09:45.256 ], 00:09:45.256 "driver_specific": { 00:09:45.256 "nvme": [ 00:09:45.256 { 00:09:45.256 "trid": { 00:09:45.256 "trtype": "TCP", 00:09:45.256 "adrfam": "IPv4", 00:09:45.256 "traddr": "10.0.0.3", 00:09:45.256 "trsvcid": "4420", 00:09:45.256 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:45.256 }, 00:09:45.256 "ctrlr_data": { 00:09:45.256 "cntlid": 1, 00:09:45.256 "vendor_id": "0x8086", 00:09:45.256 "model_number": "SPDK bdev Controller", 00:09:45.256 "serial_number": "SPDK0", 00:09:45.256 "firmware_revision": "25.01", 00:09:45.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:45.256 "oacs": { 00:09:45.256 "security": 0, 00:09:45.256 "format": 0, 00:09:45.256 "firmware": 0, 
00:09:45.256 "ns_manage": 0 00:09:45.256 }, 00:09:45.256 "multi_ctrlr": true, 00:09:45.256 "ana_reporting": false 00:09:45.256 }, 00:09:45.256 "vs": { 00:09:45.256 "nvme_version": "1.3" 00:09:45.256 }, 00:09:45.256 "ns_data": { 00:09:45.256 "id": 1, 00:09:45.256 "can_share": true 00:09:45.256 } 00:09:45.256 } 00:09:45.256 ], 00:09:45.256 "mp_policy": "active_passive" 00:09:45.256 } 00:09:45.256 } 00:09:45.256 ] 00:09:45.257 08:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:45.257 08:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63567 00:09:45.257 08:43:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:45.516 Running I/O for 10 seconds... 00:09:46.452 Latency(us) 00:09:46.452 [2024-11-20T08:43:17.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.452 Nvme0n1 : 1.00 6214.00 24.27 0.00 0.00 0.00 0.00 0.00 00:09:46.452 [2024-11-20T08:43:17.367Z] =================================================================================================================== 00:09:46.452 [2024-11-20T08:43:17.367Z] Total : 6214.00 24.27 0.00 0.00 0.00 0.00 0.00 00:09:46.452 00:09:47.388 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:47.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.388 Nvme0n1 : 2.00 6091.50 23.79 0.00 0.00 0.00 0.00 0.00 00:09:47.388 [2024-11-20T08:43:18.303Z] =================================================================================================================== 00:09:47.388 [2024-11-20T08:43:18.303Z] Total : 6091.50 23.79 0.00 0.00 0.00 0.00 0.00 00:09:47.388 00:09:47.646 true 00:09:47.647 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:47.647 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:47.905 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:47.905 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:47.905 08:43:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63567 00:09:48.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.479 Nvme0n1 : 3.00 6135.33 23.97 0.00 0.00 0.00 0.00 0.00 00:09:48.479 [2024-11-20T08:43:19.394Z] =================================================================================================================== 00:09:48.479 [2024-11-20T08:43:19.394Z] Total : 6135.33 23.97 0.00 0.00 0.00 0.00 0.00 00:09:48.479 00:09:49.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.414 Nvme0n1 : 4.00 6062.00 23.68 0.00 0.00 0.00 0.00 0.00 00:09:49.414 [2024-11-20T08:43:20.329Z] 
=================================================================================================================== 00:09:49.414 [2024-11-20T08:43:20.329Z] Total : 6062.00 23.68 0.00 0.00 0.00 0.00 0.00 00:09:49.414 00:09:50.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.352 Nvme0n1 : 5.00 6043.40 23.61 0.00 0.00 0.00 0.00 0.00 00:09:50.352 [2024-11-20T08:43:21.267Z] =================================================================================================================== 00:09:50.352 [2024-11-20T08:43:21.267Z] Total : 6043.40 23.61 0.00 0.00 0.00 0.00 0.00 00:09:50.352 00:09:51.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.288 Nvme0n1 : 6.00 5925.17 23.15 0.00 0.00 0.00 0.00 0.00 00:09:51.288 [2024-11-20T08:43:22.203Z] =================================================================================================================== 00:09:51.288 [2024-11-20T08:43:22.203Z] Total : 5925.17 23.15 0.00 0.00 0.00 0.00 0.00 00:09:51.288 00:09:52.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.665 Nvme0n1 : 7.00 5895.14 23.03 0.00 0.00 0.00 0.00 0.00 00:09:52.665 [2024-11-20T08:43:23.580Z] =================================================================================================================== 00:09:52.665 [2024-11-20T08:43:23.580Z] Total : 5895.14 23.03 0.00 0.00 0.00 0.00 0.00 00:09:52.665 00:09:53.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.601 Nvme0n1 : 8.00 5872.62 22.94 0.00 0.00 0.00 0.00 0.00 00:09:53.601 [2024-11-20T08:43:24.516Z] =================================================================================================================== 00:09:53.601 [2024-11-20T08:43:24.516Z] Total : 5872.62 22.94 0.00 0.00 0.00 0.00 0.00 00:09:53.601 00:09:54.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.613 Nvme0n1 : 9.00 5883.33 22.98 0.00 0.00 0.00 0.00 0.00 00:09:54.613 [2024-11-20T08:43:25.528Z] =================================================================================================================== 00:09:54.613 [2024-11-20T08:43:25.528Z] Total : 5883.33 22.98 0.00 0.00 0.00 0.00 0.00 00:09:54.613 00:09:55.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.549 Nvme0n1 : 10.00 5879.20 22.97 0.00 0.00 0.00 0.00 0.00 00:09:55.549 [2024-11-20T08:43:26.464Z] =================================================================================================================== 00:09:55.549 [2024-11-20T08:43:26.464Z] Total : 5879.20 22.97 0.00 0.00 0.00 0.00 0.00 00:09:55.549 00:09:55.549 00:09:55.549 Latency(us) 00:09:55.549 [2024-11-20T08:43:26.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.549 Nvme0n1 : 10.01 5886.33 22.99 0.00 0.00 21739.49 9949.56 129642.12 00:09:55.549 [2024-11-20T08:43:26.464Z] =================================================================================================================== 00:09:55.549 [2024-11-20T08:43:26.464Z] Total : 5886.33 22.99 0.00 0.00 21739.49 9949.56 129642.12 00:09:55.549 { 00:09:55.549 "results": [ 00:09:55.549 { 00:09:55.549 "job": "Nvme0n1", 00:09:55.549 "core_mask": "0x2", 00:09:55.549 "workload": "randwrite", 00:09:55.549 "status": "finished", 00:09:55.549 "queue_depth": 128, 00:09:55.549 "io_size": 4096, 00:09:55.549 "runtime": 
10.009633, 00:09:55.549 "iops": 5886.329698601337, 00:09:55.549 "mibps": 22.993475385161474, 00:09:55.549 "io_failed": 0, 00:09:55.549 "io_timeout": 0, 00:09:55.549 "avg_latency_us": 21739.48903289514, 00:09:55.549 "min_latency_us": 9949.556363636364, 00:09:55.549 "max_latency_us": 129642.12363636364 00:09:55.549 } 00:09:55.549 ], 00:09:55.549 "core_count": 1 00:09:55.549 } 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63547 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63547 ']' 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63547 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63547 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:55.549 killing process with pid 63547 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63547' 00:09:55.549 Received shutdown signal, test time was about 10.000000 seconds 00:09:55.549 00:09:55.549 Latency(us) 00:09:55.549 [2024-11-20T08:43:26.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.549 [2024-11-20T08:43:26.464Z] =================================================================================================================== 00:09:55.549 [2024-11-20T08:43:26.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63547 00:09:55.549 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63547 00:09:55.808 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:56.067 08:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:56.326 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:56.326 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:56.894 [2024-11-20 08:43:27.761585] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.894 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.153 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.153 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.153 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.153 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.153 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:57.153 08:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:57.153 request: 00:09:57.153 { 00:09:57.153 "uuid": "0c605a23-c031-4126-ab4c-70e3161d33f8", 00:09:57.153 "method": "bdev_lvol_get_lvstores", 00:09:57.153 "req_id": 1 00:09:57.153 } 00:09:57.153 Got JSON-RPC error response 00:09:57.153 response: 00:09:57.153 { 00:09:57.153 "code": -19, 00:09:57.153 "message": "No such device" 00:09:57.153 } 00:09:57.153 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:57.153 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.153 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.153 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.153 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:57.720 aio_bdev 00:09:57.720 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f66f9b49-6c0a-4d8d-88c9-d615de76fb8c 00:09:57.721 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f66f9b49-6c0a-4d8d-88c9-d615de76fb8c 00:09:57.721 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.721 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:57.721 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.721 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.721 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:57.990 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f66f9b49-6c0a-4d8d-88c9-d615de76fb8c -t 2000 00:09:58.259 [ 00:09:58.259 { 00:09:58.259 "name": "f66f9b49-6c0a-4d8d-88c9-d615de76fb8c", 00:09:58.259 "aliases": [ 00:09:58.259 "lvs/lvol" 00:09:58.259 ], 00:09:58.259 "product_name": "Logical Volume", 00:09:58.259 "block_size": 4096, 00:09:58.259 "num_blocks": 38912, 00:09:58.259 "uuid": "f66f9b49-6c0a-4d8d-88c9-d615de76fb8c", 00:09:58.259 "assigned_rate_limits": { 00:09:58.259 "rw_ios_per_sec": 0, 00:09:58.259 "rw_mbytes_per_sec": 0, 00:09:58.259 "r_mbytes_per_sec": 0, 00:09:58.259 "w_mbytes_per_sec": 0 00:09:58.259 }, 00:09:58.259 "claimed": false, 00:09:58.259 "zoned": false, 00:09:58.259 "supported_io_types": { 00:09:58.259 "read": true, 00:09:58.259 "write": true, 00:09:58.259 "unmap": true, 00:09:58.259 "flush": false, 00:09:58.259 "reset": true, 00:09:58.259 "nvme_admin": false, 00:09:58.259 "nvme_io": false, 00:09:58.259 "nvme_io_md": false, 00:09:58.259 "write_zeroes": true, 00:09:58.259 "zcopy": false, 00:09:58.259 "get_zone_info": false, 00:09:58.259 "zone_management": false, 00:09:58.259 "zone_append": false, 00:09:58.259 "compare": false, 00:09:58.259 "compare_and_write": false, 00:09:58.259 "abort": false, 00:09:58.259 "seek_hole": true, 00:09:58.259 "seek_data": true, 00:09:58.259 "copy": false, 00:09:58.259 "nvme_iov_md": false 00:09:58.259 }, 00:09:58.259 "driver_specific": { 00:09:58.259 "lvol": { 00:09:58.259 "lvol_store_uuid": "0c605a23-c031-4126-ab4c-70e3161d33f8", 00:09:58.259 "base_bdev": "aio_bdev", 00:09:58.259 "thin_provision": false, 00:09:58.259 "num_allocated_clusters": 38, 00:09:58.259 "snapshot": false, 00:09:58.259 "clone": false, 00:09:58.259 "esnap_clone": false 00:09:58.259 } 00:09:58.259 } 00:09:58.259 } 00:09:58.259 ] 00:09:58.259 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:58.259 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:58.259 08:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:58.532 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:58.532 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:58.532 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:58.790 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:58.790 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f66f9b49-6c0a-4d8d-88c9-d615de76fb8c 00:09:59.357 08:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c605a23-c031-4126-ab4c-70e3161d33f8 00:09:59.616 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:59.875 08:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:00.443 ************************************ 00:10:00.443 END TEST lvs_grow_clean 00:10:00.443 ************************************ 00:10:00.443 00:10:00.443 real 0m19.246s 00:10:00.443 user 0m17.966s 00:10:00.443 sys 0m2.882s 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:00.443 ************************************ 00:10:00.443 START TEST lvs_grow_dirty 00:10:00.443 ************************************ 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:00.443 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.703 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:00.703 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:00.961 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:00.962 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:00.962 08:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:01.221 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:01.221 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:01.221 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e lvol 150 00:10:01.480 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=36d22acd-d75b-4985-80f7-7295ed44306c 00:10:01.480 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:01.481 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:01.772 [2024-11-20 08:43:32.642891] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:01.772 [2024-11-20 08:43:32.643041] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:01.772 true 00:10:02.046 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:02.046 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:02.046 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:02.046 08:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:02.614 08:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36d22acd-d75b-4985-80f7-7295ed44306c 00:10:02.614 08:43:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:02.873 [2024-11-20 08:43:33.759663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:02.873 08:43:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:03.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:03.444 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63828 00:10:03.444 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:03.444 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:03.445 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63828 /var/tmp/bdevperf.sock 00:10:03.445 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63828 ']' 00:10:03.445 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:03.445 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.445 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:03.445 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.445 08:43:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.445 [2024-11-20 08:43:34.158211] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:03.445 [2024-11-20 08:43:34.158587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63828 ] 00:10:03.445 [2024-11-20 08:43:34.308018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.704 [2024-11-20 08:43:34.393970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.704 [2024-11-20 08:43:34.470146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:04.272 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.272 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:04.272 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:04.840 Nvme0n1 00:10:04.840 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:05.099 [ 00:10:05.099 { 00:10:05.099 "name": "Nvme0n1", 00:10:05.099 "aliases": [ 00:10:05.099 "36d22acd-d75b-4985-80f7-7295ed44306c" 00:10:05.099 ], 00:10:05.099 "product_name": "NVMe disk", 00:10:05.099 "block_size": 4096, 00:10:05.099 "num_blocks": 38912, 00:10:05.099 "uuid": "36d22acd-d75b-4985-80f7-7295ed44306c", 00:10:05.099 "numa_id": -1, 00:10:05.099 "assigned_rate_limits": { 00:10:05.099 "rw_ios_per_sec": 0, 00:10:05.099 "rw_mbytes_per_sec": 0, 00:10:05.099 "r_mbytes_per_sec": 0, 00:10:05.099 "w_mbytes_per_sec": 0 00:10:05.099 }, 00:10:05.099 "claimed": false, 00:10:05.099 "zoned": false, 00:10:05.099 "supported_io_types": { 00:10:05.099 "read": true, 00:10:05.099 "write": true, 00:10:05.099 "unmap": true, 00:10:05.099 "flush": true, 00:10:05.099 "reset": true, 00:10:05.099 "nvme_admin": true, 00:10:05.099 "nvme_io": true, 00:10:05.099 "nvme_io_md": false, 00:10:05.099 "write_zeroes": true, 00:10:05.099 "zcopy": false, 00:10:05.099 "get_zone_info": false, 00:10:05.099 "zone_management": false, 00:10:05.099 "zone_append": false, 00:10:05.099 "compare": true, 00:10:05.099 "compare_and_write": true, 00:10:05.099 "abort": true, 00:10:05.099 "seek_hole": false, 00:10:05.099 "seek_data": false, 00:10:05.099 "copy": true, 00:10:05.099 "nvme_iov_md": false 00:10:05.099 }, 00:10:05.099 "memory_domains": [ 00:10:05.099 { 00:10:05.099 "dma_device_id": "system", 00:10:05.099 "dma_device_type": 1 00:10:05.099 } 00:10:05.099 ], 00:10:05.099 "driver_specific": { 00:10:05.099 "nvme": [ 00:10:05.099 { 00:10:05.099 "trid": { 00:10:05.099 "trtype": "TCP", 00:10:05.099 "adrfam": "IPv4", 00:10:05.099 "traddr": "10.0.0.3", 00:10:05.099 "trsvcid": "4420", 00:10:05.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:05.099 }, 00:10:05.099 "ctrlr_data": { 00:10:05.099 "cntlid": 1, 00:10:05.099 "vendor_id": "0x8086", 00:10:05.099 "model_number": "SPDK bdev Controller", 00:10:05.099 "serial_number": "SPDK0", 00:10:05.099 "firmware_revision": "25.01", 00:10:05.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:05.099 "oacs": { 00:10:05.099 "security": 0, 00:10:05.099 "format": 0, 00:10:05.099 "firmware": 0, 
00:10:05.099 "ns_manage": 0 00:10:05.099 }, 00:10:05.099 "multi_ctrlr": true, 00:10:05.099 "ana_reporting": false 00:10:05.099 }, 00:10:05.099 "vs": { 00:10:05.099 "nvme_version": "1.3" 00:10:05.099 }, 00:10:05.099 "ns_data": { 00:10:05.099 "id": 1, 00:10:05.099 "can_share": true 00:10:05.099 } 00:10:05.099 } 00:10:05.099 ], 00:10:05.099 "mp_policy": "active_passive" 00:10:05.099 } 00:10:05.099 } 00:10:05.099 ] 00:10:05.099 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:05.099 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63857 00:10:05.099 08:43:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:05.099 Running I/O for 10 seconds... 00:10:06.036 Latency(us) 00:10:06.036 [2024-11-20T08:43:36.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.036 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:06.037 [2024-11-20T08:43:36.952Z] =================================================================================================================== 00:10:06.037 [2024-11-20T08:43:36.952Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:06.037 00:10:06.971 08:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:07.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.230 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:07.230 [2024-11-20T08:43:38.145Z] =================================================================================================================== 00:10:07.230 [2024-11-20T08:43:38.145Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:07.230 00:10:07.230 true 00:10:07.230 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:07.230 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:07.798 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:07.798 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:07.798 08:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63857 00:10:08.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.057 Nvme0n1 : 3.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:10:08.057 [2024-11-20T08:43:38.972Z] =================================================================================================================== 00:10:08.057 [2024-11-20T08:43:38.972Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:10:08.057 00:10:08.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.992 Nvme0n1 : 4.00 6254.75 24.43 0.00 0.00 0.00 0.00 0.00 00:10:08.992 [2024-11-20T08:43:39.907Z] 
=================================================================================================================== 00:10:08.992 [2024-11-20T08:43:39.907Z] Total : 6254.75 24.43 0.00 0.00 0.00 0.00 0.00 00:10:08.992 00:10:10.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.368 Nvme0n1 : 5.00 6097.80 23.82 0.00 0.00 0.00 0.00 0.00 00:10:10.368 [2024-11-20T08:43:41.283Z] =================================================================================================================== 00:10:10.368 [2024-11-20T08:43:41.283Z] Total : 6097.80 23.82 0.00 0.00 0.00 0.00 0.00 00:10:10.368 00:10:11.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.018 Nvme0n1 : 6.00 6076.33 23.74 0.00 0.00 0.00 0.00 0.00 00:10:11.018 [2024-11-20T08:43:41.933Z] =================================================================================================================== 00:10:11.018 [2024-11-20T08:43:41.933Z] Total : 6076.33 23.74 0.00 0.00 0.00 0.00 0.00 00:10:11.018 00:10:12.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.397 Nvme0n1 : 7.00 6079.14 23.75 0.00 0.00 0.00 0.00 0.00 00:10:12.397 [2024-11-20T08:43:43.312Z] =================================================================================================================== 00:10:12.397 [2024-11-20T08:43:43.312Z] Total : 6079.14 23.75 0.00 0.00 0.00 0.00 0.00 00:10:12.397 00:10:13.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.333 Nvme0n1 : 8.00 6081.25 23.75 0.00 0.00 0.00 0.00 0.00 00:10:13.333 [2024-11-20T08:43:44.248Z] =================================================================================================================== 00:10:13.333 [2024-11-20T08:43:44.248Z] Total : 6081.25 23.75 0.00 0.00 0.00 0.00 0.00 00:10:13.333 00:10:14.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.270 Nvme0n1 : 9.00 6068.78 23.71 0.00 0.00 0.00 0.00 0.00 00:10:14.270 [2024-11-20T08:43:45.185Z] =================================================================================================================== 00:10:14.270 [2024-11-20T08:43:45.185Z] Total : 6068.78 23.71 0.00 0.00 0.00 0.00 0.00 00:10:14.270 00:10:15.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.227 Nvme0n1 : 10.00 6058.80 23.67 0.00 0.00 0.00 0.00 0.00 00:10:15.227 [2024-11-20T08:43:46.142Z] =================================================================================================================== 00:10:15.227 [2024-11-20T08:43:46.142Z] Total : 6058.80 23.67 0.00 0.00 0.00 0.00 0.00 00:10:15.227 00:10:15.227 00:10:15.227 Latency(us) 00:10:15.227 [2024-11-20T08:43:46.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.227 Nvme0n1 : 10.02 6061.53 23.68 0.00 0.00 21110.51 12809.31 111053.73 00:10:15.227 [2024-11-20T08:43:46.142Z] =================================================================================================================== 00:10:15.227 [2024-11-20T08:43:46.142Z] Total : 6061.53 23.68 0.00 0.00 21110.51 12809.31 111053.73 00:10:15.227 { 00:10:15.227 "results": [ 00:10:15.227 { 00:10:15.227 "job": "Nvme0n1", 00:10:15.227 "core_mask": "0x2", 00:10:15.227 "workload": "randwrite", 00:10:15.227 "status": "finished", 00:10:15.227 "queue_depth": 128, 00:10:15.227 "io_size": 4096, 00:10:15.227 "runtime": 
10.016611, 00:10:15.227 "iops": 6061.531190539395, 00:10:15.227 "mibps": 23.67785621304451, 00:10:15.227 "io_failed": 0, 00:10:15.227 "io_timeout": 0, 00:10:15.227 "avg_latency_us": 21110.514987572544, 00:10:15.227 "min_latency_us": 12809.309090909092, 00:10:15.227 "max_latency_us": 111053.73090909091 00:10:15.227 } 00:10:15.227 ], 00:10:15.227 "core_count": 1 00:10:15.227 } 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63828 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63828 ']' 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63828 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63828 00:10:15.227 killing process with pid 63828 00:10:15.227 Received shutdown signal, test time was about 10.000000 seconds 00:10:15.227 00:10:15.227 Latency(us) 00:10:15.227 [2024-11-20T08:43:46.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.227 [2024-11-20T08:43:46.142Z] =================================================================================================================== 00:10:15.227 [2024-11-20T08:43:46.142Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63828' 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63828 00:10:15.227 08:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63828 00:10:15.486 08:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:15.745 08:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:16.004 08:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:16.004 08:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63472 
00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63472 00:10:16.570 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63472 Killed "${NVMF_APP[@]}" "$@" 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63995 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63995 00:10:16.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63995 ']' 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.570 08:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:16.570 [2024-11-20 08:43:47.330026] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:16.570 [2024-11-20 08:43:47.330402] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.570 [2024-11-20 08:43:47.478844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.829 [2024-11-20 08:43:47.554975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.829 [2024-11-20 08:43:47.555312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.829 [2024-11-20 08:43:47.555480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.829 [2024-11-20 08:43:47.555640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.829 [2024-11-20 08:43:47.555684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:16.829 [2024-11-20 08:43:47.556226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.829 [2024-11-20 08:43:47.630066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.764 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.764 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:17.764 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.764 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.764 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:17.764 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.764 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:18.022 [2024-11-20 08:43:48.702526] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:18.022 [2024-11-20 08:43:48.702881] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:18.022 [2024-11-20 08:43:48.703230] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:18.023 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:18.023 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 36d22acd-d75b-4985-80f7-7295ed44306c 00:10:18.023 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=36d22acd-d75b-4985-80f7-7295ed44306c 00:10:18.023 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.023 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:18.023 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.023 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.023 08:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:18.282 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36d22acd-d75b-4985-80f7-7295ed44306c -t 2000 00:10:18.541 [ 00:10:18.541 { 00:10:18.541 "name": "36d22acd-d75b-4985-80f7-7295ed44306c", 00:10:18.541 "aliases": [ 00:10:18.541 "lvs/lvol" 00:10:18.541 ], 00:10:18.541 "product_name": "Logical Volume", 00:10:18.541 "block_size": 4096, 00:10:18.541 "num_blocks": 38912, 00:10:18.541 "uuid": "36d22acd-d75b-4985-80f7-7295ed44306c", 00:10:18.541 "assigned_rate_limits": { 00:10:18.541 "rw_ios_per_sec": 0, 00:10:18.541 "rw_mbytes_per_sec": 0, 00:10:18.541 "r_mbytes_per_sec": 0, 00:10:18.541 "w_mbytes_per_sec": 0 00:10:18.541 }, 00:10:18.541 
"claimed": false, 00:10:18.541 "zoned": false, 00:10:18.541 "supported_io_types": { 00:10:18.541 "read": true, 00:10:18.541 "write": true, 00:10:18.541 "unmap": true, 00:10:18.541 "flush": false, 00:10:18.541 "reset": true, 00:10:18.541 "nvme_admin": false, 00:10:18.541 "nvme_io": false, 00:10:18.541 "nvme_io_md": false, 00:10:18.541 "write_zeroes": true, 00:10:18.541 "zcopy": false, 00:10:18.541 "get_zone_info": false, 00:10:18.541 "zone_management": false, 00:10:18.541 "zone_append": false, 00:10:18.541 "compare": false, 00:10:18.541 "compare_and_write": false, 00:10:18.541 "abort": false, 00:10:18.541 "seek_hole": true, 00:10:18.541 "seek_data": true, 00:10:18.541 "copy": false, 00:10:18.541 "nvme_iov_md": false 00:10:18.541 }, 00:10:18.541 "driver_specific": { 00:10:18.541 "lvol": { 00:10:18.541 "lvol_store_uuid": "c3e25c21-ee1b-4115-8f4e-676c9d68ac6e", 00:10:18.541 "base_bdev": "aio_bdev", 00:10:18.541 "thin_provision": false, 00:10:18.541 "num_allocated_clusters": 38, 00:10:18.541 "snapshot": false, 00:10:18.541 "clone": false, 00:10:18.541 "esnap_clone": false 00:10:18.541 } 00:10:18.541 } 00:10:18.541 } 00:10:18.541 ] 00:10:18.541 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:18.541 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:18.541 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:18.799 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:18.799 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:18.799 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:19.083 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:19.083 08:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:19.650 [2024-11-20 08:43:50.259856] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.650 08:43:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:19.650 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:19.650 request: 00:10:19.650 { 00:10:19.650 "uuid": "c3e25c21-ee1b-4115-8f4e-676c9d68ac6e", 00:10:19.650 "method": "bdev_lvol_get_lvstores", 00:10:19.650 "req_id": 1 00:10:19.650 } 00:10:19.650 Got JSON-RPC error response 00:10:19.650 response: 00:10:19.650 { 00:10:19.650 "code": -19, 00:10:19.650 "message": "No such device" 00:10:19.650 } 00:10:19.909 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:19.909 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:19.909 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:19.909 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:19.909 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:20.168 aio_bdev 00:10:20.168 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 36d22acd-d75b-4985-80f7-7295ed44306c 00:10:20.168 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=36d22acd-d75b-4985-80f7-7295ed44306c 00:10:20.168 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.168 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:20.168 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.168 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.168 08:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:20.426 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36d22acd-d75b-4985-80f7-7295ed44306c -t 2000 00:10:20.684 [ 00:10:20.684 { 
00:10:20.684 "name": "36d22acd-d75b-4985-80f7-7295ed44306c", 00:10:20.684 "aliases": [ 00:10:20.684 "lvs/lvol" 00:10:20.684 ], 00:10:20.684 "product_name": "Logical Volume", 00:10:20.684 "block_size": 4096, 00:10:20.684 "num_blocks": 38912, 00:10:20.684 "uuid": "36d22acd-d75b-4985-80f7-7295ed44306c", 00:10:20.684 "assigned_rate_limits": { 00:10:20.684 "rw_ios_per_sec": 0, 00:10:20.684 "rw_mbytes_per_sec": 0, 00:10:20.684 "r_mbytes_per_sec": 0, 00:10:20.684 "w_mbytes_per_sec": 0 00:10:20.684 }, 00:10:20.684 "claimed": false, 00:10:20.684 "zoned": false, 00:10:20.684 "supported_io_types": { 00:10:20.684 "read": true, 00:10:20.684 "write": true, 00:10:20.684 "unmap": true, 00:10:20.684 "flush": false, 00:10:20.684 "reset": true, 00:10:20.684 "nvme_admin": false, 00:10:20.684 "nvme_io": false, 00:10:20.684 "nvme_io_md": false, 00:10:20.684 "write_zeroes": true, 00:10:20.684 "zcopy": false, 00:10:20.684 "get_zone_info": false, 00:10:20.684 "zone_management": false, 00:10:20.684 "zone_append": false, 00:10:20.684 "compare": false, 00:10:20.684 "compare_and_write": false, 00:10:20.684 "abort": false, 00:10:20.684 "seek_hole": true, 00:10:20.684 "seek_data": true, 00:10:20.684 "copy": false, 00:10:20.684 "nvme_iov_md": false 00:10:20.684 }, 00:10:20.684 "driver_specific": { 00:10:20.684 "lvol": { 00:10:20.684 "lvol_store_uuid": "c3e25c21-ee1b-4115-8f4e-676c9d68ac6e", 00:10:20.684 "base_bdev": "aio_bdev", 00:10:20.684 "thin_provision": false, 00:10:20.684 "num_allocated_clusters": 38, 00:10:20.684 "snapshot": false, 00:10:20.684 "clone": false, 00:10:20.684 "esnap_clone": false 00:10:20.684 } 00:10:20.684 } 00:10:20.684 } 00:10:20.684 ] 00:10:20.684 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:20.684 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:20.684 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:20.942 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:20.942 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:20.942 08:43:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:21.202 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:21.202 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 36d22acd-d75b-4985-80f7-7295ed44306c 00:10:21.462 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3e25c21-ee1b-4115-8f4e-676c9d68ac6e 00:10:21.721 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:21.979 08:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.546 ************************************ 00:10:22.546 END TEST lvs_grow_dirty 00:10:22.546 ************************************ 00:10:22.546 00:10:22.546 real 0m22.182s 00:10:22.546 user 0m44.957s 00:10:22.546 sys 0m9.064s 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:22.546 nvmf_trace.0 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.546 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.115 rmmod nvme_tcp 00:10:23.115 rmmod nvme_fabrics 00:10:23.115 rmmod nvme_keyring 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63995 ']' 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63995 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63995 ']' 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63995 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:23.115 08:43:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63995 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.115 killing process with pid 63995 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63995' 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63995 00:10:23.115 08:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63995 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:23.374 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:23.633 00:10:23.633 real 0m43.997s 00:10:23.633 user 1m10.251s 00:10:23.633 sys 0m13.031s 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:23.633 ************************************ 00:10:23.633 END TEST nvmf_lvs_grow 00:10:23.633 ************************************ 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.633 ************************************ 00:10:23.633 START TEST nvmf_bdev_io_wait 00:10:23.633 ************************************ 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:23.633 * Looking for test storage... 
00:10:23.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.633 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.893 --rc genhtml_branch_coverage=1 00:10:23.893 --rc genhtml_function_coverage=1 00:10:23.893 --rc genhtml_legend=1 00:10:23.893 --rc geninfo_all_blocks=1 00:10:23.893 --rc geninfo_unexecuted_blocks=1 00:10:23.893 00:10:23.893 ' 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.893 --rc genhtml_branch_coverage=1 00:10:23.893 --rc genhtml_function_coverage=1 00:10:23.893 --rc genhtml_legend=1 00:10:23.893 --rc geninfo_all_blocks=1 00:10:23.893 --rc geninfo_unexecuted_blocks=1 00:10:23.893 00:10:23.893 ' 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.893 --rc genhtml_branch_coverage=1 00:10:23.893 --rc genhtml_function_coverage=1 00:10:23.893 --rc genhtml_legend=1 00:10:23.893 --rc geninfo_all_blocks=1 00:10:23.893 --rc geninfo_unexecuted_blocks=1 00:10:23.893 00:10:23.893 ' 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.893 --rc genhtml_branch_coverage=1 00:10:23.893 --rc genhtml_function_coverage=1 00:10:23.893 --rc genhtml_legend=1 00:10:23.893 --rc geninfo_all_blocks=1 00:10:23.893 --rc geninfo_unexecuted_blocks=1 00:10:23.893 00:10:23.893 ' 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.893 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.894 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
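Before bdev_io_wait can attach to the target over TCP, nvmftestinit (the next call in the trace) builds a veth and bridge topology with the target interfaces moved into a dedicated network namespace. A condensed sketch of that layout, reproducing only commands and addresses that appear verbatim in the trace below (the second interface pair, the link-up steps, and the iptables rules are set up the same way and are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    # initiator leg stays in the host namespace: nvmf_init_if gets 10.0.0.1/24
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    # target leg is moved into the namespace: nvmf_tgt_if gets 10.0.0.3/24 and later listens on port 4420
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # the *_br peer ends are enslaved to a single bridge so the two sides can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br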
00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:23.894 
08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:23.894 Cannot find device "nvmf_init_br" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:23.894 Cannot find device "nvmf_init_br2" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:23.894 Cannot find device "nvmf_tgt_br" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.894 Cannot find device "nvmf_tgt_br2" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:23.894 Cannot find device "nvmf_init_br" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:23.894 Cannot find device "nvmf_init_br2" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:23.894 Cannot find device "nvmf_tgt_br" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:23.894 Cannot find device "nvmf_tgt_br2" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:23.894 Cannot find device "nvmf_br" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:23.894 Cannot find device "nvmf_init_if" 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:23.894 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:24.153 Cannot find device "nvmf_init_if2" 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:24.153 
08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:24.153 08:43:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:24.153 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:24.153 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:10:24.153 00:10:24.153 --- 10.0.0.3 ping statistics --- 00:10:24.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.153 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:24.153 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:24.153 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:10:24.153 00:10:24.153 --- 10.0.0.4 ping statistics --- 00:10:24.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.153 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:24.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:24.153 00:10:24.153 --- 10.0.0.1 ping statistics --- 00:10:24.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.153 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:24.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:24.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:10:24.153 00:10:24.153 --- 10.0.0.2 ping statistics --- 00:10:24.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.153 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:24.153 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64383 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64383 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64383 ']' 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.413 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.413 [2024-11-20 08:43:55.127350] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:24.413 [2024-11-20 08:43:55.128080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.413 [2024-11-20 08:43:55.276940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.672 [2024-11-20 08:43:55.354282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.672 [2024-11-20 08:43:55.354358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.672 [2024-11-20 08:43:55.354369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.672 [2024-11-20 08:43:55.354378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.672 [2024-11-20 08:43:55.354385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.672 [2024-11-20 08:43:55.355727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.672 [2024-11-20 08:43:55.355892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.672 [2024-11-20 08:43:55.356061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.672 [2024-11-20 08:43:55.356063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.672 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.673 [2024-11-20 08:43:55.528931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.673 [2024-11-20 08:43:55.546321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.673 Malloc0 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.673 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.932 [2024-11-20 08:43:55.608280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64405 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64407 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64408 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.932 08:43:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:24.932 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.932 { 00:10:24.932 "params": { 00:10:24.932 "name": "Nvme$subsystem", 00:10:24.932 "trtype": "$TEST_TRANSPORT", 00:10:24.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.932 "adrfam": "ipv4", 00:10:24.932 "trsvcid": "$NVMF_PORT", 00:10:24.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.932 "hdgst": ${hdgst:-false}, 00:10:24.932 "ddgst": ${ddgst:-false} 00:10:24.932 }, 00:10:24.932 "method": "bdev_nvme_attach_controller" 00:10:24.932 } 00:10:24.933 EOF 00:10:24.933 )") 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64411 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.933 { 00:10:24.933 "params": { 00:10:24.933 "name": "Nvme$subsystem", 00:10:24.933 "trtype": "$TEST_TRANSPORT", 00:10:24.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.933 "adrfam": "ipv4", 00:10:24.933 "trsvcid": "$NVMF_PORT", 00:10:24.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.933 "hdgst": ${hdgst:-false}, 00:10:24.933 "ddgst": ${ddgst:-false} 00:10:24.933 }, 00:10:24.933 "method": "bdev_nvme_attach_controller" 00:10:24.933 } 00:10:24.933 EOF 00:10:24.933 )") 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.933 { 00:10:24.933 "params": { 00:10:24.933 "name": "Nvme$subsystem", 00:10:24.933 
"trtype": "$TEST_TRANSPORT", 00:10:24.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.933 "adrfam": "ipv4", 00:10:24.933 "trsvcid": "$NVMF_PORT", 00:10:24.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.933 "hdgst": ${hdgst:-false}, 00:10:24.933 "ddgst": ${ddgst:-false} 00:10:24.933 }, 00:10:24.933 "method": "bdev_nvme_attach_controller" 00:10:24.933 } 00:10:24.933 EOF 00:10:24.933 )") 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.933 { 00:10:24.933 "params": { 00:10:24.933 "name": "Nvme$subsystem", 00:10:24.933 "trtype": "$TEST_TRANSPORT", 00:10:24.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.933 "adrfam": "ipv4", 00:10:24.933 "trsvcid": "$NVMF_PORT", 00:10:24.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.933 "hdgst": ${hdgst:-false}, 00:10:24.933 "ddgst": ${ddgst:-false} 00:10:24.933 }, 00:10:24.933 "method": "bdev_nvme_attach_controller" 00:10:24.933 } 00:10:24.933 EOF 00:10:24.933 )") 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.933 "params": { 00:10:24.933 "name": "Nvme1", 00:10:24.933 "trtype": "tcp", 00:10:24.933 "traddr": "10.0.0.3", 00:10:24.933 "adrfam": "ipv4", 00:10:24.933 "trsvcid": "4420", 00:10:24.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.933 "hdgst": false, 00:10:24.933 "ddgst": false 00:10:24.933 }, 00:10:24.933 "method": "bdev_nvme_attach_controller" 00:10:24.933 }' 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.933 "params": { 00:10:24.933 "name": "Nvme1", 00:10:24.933 "trtype": "tcp", 00:10:24.933 "traddr": "10.0.0.3", 00:10:24.933 "adrfam": "ipv4", 00:10:24.933 "trsvcid": "4420", 00:10:24.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.933 "hdgst": false, 00:10:24.933 "ddgst": false 00:10:24.933 }, 00:10:24.933 "method": "bdev_nvme_attach_controller" 00:10:24.933 }' 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.933 "params": { 00:10:24.933 "name": "Nvme1", 00:10:24.933 "trtype": "tcp", 00:10:24.933 "traddr": "10.0.0.3", 00:10:24.933 "adrfam": "ipv4", 00:10:24.933 "trsvcid": "4420", 00:10:24.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.933 "hdgst": false, 00:10:24.933 "ddgst": false 00:10:24.933 }, 00:10:24.933 "method": "bdev_nvme_attach_controller" 00:10:24.933 }' 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.933 "params": { 00:10:24.933 "name": "Nvme1", 00:10:24.933 "trtype": "tcp", 00:10:24.933 "traddr": "10.0.0.3", 00:10:24.933 "adrfam": "ipv4", 00:10:24.933 "trsvcid": "4420", 00:10:24.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.933 "hdgst": false, 00:10:24.933 "ddgst": false 00:10:24.933 }, 00:10:24.933 "method": "bdev_nvme_attach_controller" 00:10:24.933 }' 00:10:24.933 [2024-11-20 08:43:55.674651] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:24.933 [2024-11-20 08:43:55.675385] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:24.933 08:43:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64405 00:10:24.933 [2024-11-20 08:43:55.689833] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:24.933 [2024-11-20 08:43:55.690319] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:24.933 [2024-11-20 08:43:55.694508] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:24.933 [2024-11-20 08:43:55.694851] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:24.933 [2024-11-20 08:43:55.710745] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:24.933 [2024-11-20 08:43:55.710866] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:25.193 [2024-11-20 08:43:55.905516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.193 [2024-11-20 08:43:55.967935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:25.193 [2024-11-20 08:43:55.981131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:25.193 [2024-11-20 08:43:56.020083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.193 [2024-11-20 08:43:56.088215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:25.193 [2024-11-20 08:43:56.102278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:25.452 [2024-11-20 08:43:56.106722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.452 [2024-11-20 08:43:56.173283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:25.452 [2024-11-20 08:43:56.187216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:25.452 [2024-11-20 08:43:56.217359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.452 Running I/O for 1 seconds... 00:10:25.452 Running I/O for 1 seconds... 00:10:25.452 [2024-11-20 08:43:56.285502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:25.452 [2024-11-20 08:43:56.301149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:25.452 Running I/O for 1 seconds... 00:10:25.711 Running I/O for 1 seconds... 
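Once all four instances print "Running I/O for 1 seconds...", the parent script simply waits on each pid before tearing down; the wait calls are visible in the trace (bdev_io_wait.sh lines 37-40) around the results tables that follow. As a sketch, with the pids observed in this particular run noted for reference:

# Variable names as echoed by the trace; pid values are from this run only.
wait "$WRITE_PID"   # 64405  bdevperf -w write  -m 0x10
wait "$READ_PID"    # 64407  bdevperf -w read   -m 0x20
wait "$FLUSH_PID"   # 64408  bdevperf -w flush  -m 0x40
wait "$UNMAP_PID"   # 64411  bdevperf -w unmap  -m 0x80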
00:10:26.759 174896.00 IOPS, 683.19 MiB/s 00:10:26.759 Latency(us) 00:10:26.759 [2024-11-20T08:43:57.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.759 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:26.759 Nvme1n1 : 1.00 174550.36 681.84 0.00 0.00 729.62 379.81 1980.97 00:10:26.759 [2024-11-20T08:43:57.674Z] =================================================================================================================== 00:10:26.759 [2024-11-20T08:43:57.674Z] Total : 174550.36 681.84 0.00 0.00 729.62 379.81 1980.97 00:10:26.759 9779.00 IOPS, 38.20 MiB/s 00:10:26.759 Latency(us) 00:10:26.759 [2024-11-20T08:43:57.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.759 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:26.759 Nvme1n1 : 1.01 9817.73 38.35 0.00 0.00 12973.41 7447.27 17873.45 00:10:26.759 [2024-11-20T08:43:57.674Z] =================================================================================================================== 00:10:26.759 [2024-11-20T08:43:57.674Z] Total : 9817.73 38.35 0.00 0.00 12973.41 7447.27 17873.45 00:10:26.759 7568.00 IOPS, 29.56 MiB/s 00:10:26.759 Latency(us) 00:10:26.759 [2024-11-20T08:43:57.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.759 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:26.759 Nvme1n1 : 1.01 7614.16 29.74 0.00 0.00 16707.43 5153.51 25022.84 00:10:26.759 [2024-11-20T08:43:57.674Z] =================================================================================================================== 00:10:26.759 [2024-11-20T08:43:57.674Z] Total : 7614.16 29.74 0.00 0.00 16707.43 5153.51 25022.84 00:10:26.759 9292.00 IOPS, 36.30 MiB/s 00:10:26.759 Latency(us) 00:10:26.759 [2024-11-20T08:43:57.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.759 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:26.759 Nvme1n1 : 1.01 9361.45 36.57 0.00 0.00 13613.71 6732.33 22163.08 00:10:26.759 [2024-11-20T08:43:57.674Z] =================================================================================================================== 00:10:26.759 [2024-11-20T08:43:57.674Z] Total : 9361.45 36.57 0.00 0.00 13613.71 6732.33 22163.08 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64407 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64408 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64411 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:10:26.759 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:27.017 rmmod nvme_tcp 00:10:27.017 rmmod nvme_fabrics 00:10:27.017 rmmod nvme_keyring 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64383 ']' 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64383 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64383 ']' 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64383 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64383 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.017 killing process with pid 64383 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64383' 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64383 00:10:27.017 08:43:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64383 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:27.275 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:27.532 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:27.533 00:10:27.533 real 0m3.844s 00:10:27.533 user 0m15.268s 00:10:27.533 sys 0m2.464s 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.533 ************************************ 00:10:27.533 END TEST nvmf_bdev_io_wait 00:10:27.533 ************************************ 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.533 ************************************ 00:10:27.533 START TEST nvmf_queue_depth 00:10:27.533 ************************************ 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:27.533 * Looking for test storage... 
00:10:27.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:27.533 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.792 --rc genhtml_branch_coverage=1 00:10:27.792 --rc genhtml_function_coverage=1 00:10:27.792 --rc genhtml_legend=1 00:10:27.792 --rc geninfo_all_blocks=1 00:10:27.792 --rc geninfo_unexecuted_blocks=1 00:10:27.792 00:10:27.792 ' 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.792 --rc genhtml_branch_coverage=1 00:10:27.792 --rc genhtml_function_coverage=1 00:10:27.792 --rc genhtml_legend=1 00:10:27.792 --rc geninfo_all_blocks=1 00:10:27.792 --rc geninfo_unexecuted_blocks=1 00:10:27.792 00:10:27.792 ' 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.792 --rc genhtml_branch_coverage=1 00:10:27.792 --rc genhtml_function_coverage=1 00:10:27.792 --rc genhtml_legend=1 00:10:27.792 --rc geninfo_all_blocks=1 00:10:27.792 --rc geninfo_unexecuted_blocks=1 00:10:27.792 00:10:27.792 ' 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.792 --rc genhtml_branch_coverage=1 00:10:27.792 --rc genhtml_function_coverage=1 00:10:27.792 --rc genhtml_legend=1 00:10:27.792 --rc geninfo_all_blocks=1 00:10:27.792 --rc geninfo_unexecuted_blocks=1 00:10:27.792 00:10:27.792 ' 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.792 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.793 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:27.793 
08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:27.793 08:43:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:27.793 Cannot find device "nvmf_init_br" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:27.793 Cannot find device "nvmf_init_br2" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:27.793 Cannot find device "nvmf_tgt_br" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.793 Cannot find device "nvmf_tgt_br2" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:27.793 Cannot find device "nvmf_init_br" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:27.793 Cannot find device "nvmf_init_br2" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:27.793 Cannot find device "nvmf_tgt_br" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:27.793 Cannot find device "nvmf_tgt_br2" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:27.793 Cannot find device "nvmf_br" 00:10:27.793 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:27.794 Cannot find device "nvmf_init_if" 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:27.794 Cannot find device "nvmf_init_if2" 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.794 08:43:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:27.794 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.087 
08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:28.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:10:28.087 00:10:28.087 --- 10.0.0.3 ping statistics --- 00:10:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.087 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:28.087 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:28.087 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:10:28.087 00:10:28.087 --- 10.0.0.4 ping statistics --- 00:10:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.087 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:28.087 00:10:28.087 --- 10.0.0.1 ping statistics --- 00:10:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.087 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:28.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:28.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:28.087 00:10:28.087 --- 10.0.0.2 ping statistics --- 00:10:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.087 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64672 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64672 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64672 ']' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.087 08:43:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.345 [2024-11-20 08:43:59.021277] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
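Before the target app starts here, nvmftestinit ran the nvmf_veth_init sequence traced just above (the ip netns / ip link / iptables commands ending in the four ping checks). Condensed into a sketch of what that leaves behind — command forms taken from the trace, shown for the first target interface only; the script repeats the pattern for all four veth pairs:

# Resulting layout (addresses as confirmed by the pings above):
#   nvmf_init_if  10.0.0.1/24 and nvmf_init_if2 10.0.0.2/24 in the default netns,
#   nvmf_tgt_if   10.0.0.3/24 and nvmf_tgt_if2  10.0.0.4/24 inside nvmf_tgt_ns_spdk;
#   each interface's veth peer (nvmf_*_br / nvmf_*_br2) is enslaved to the
#   nvmf_br bridge, and the iptables rules admit TCP port 4420 on the init side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_tgt_br master nvmf_br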
00:10:28.345 [2024-11-20 08:43:59.021370] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.345 [2024-11-20 08:43:59.169294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.345 [2024-11-20 08:43:59.258333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.345 [2024-11-20 08:43:59.258397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.345 [2024-11-20 08:43:59.258410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.345 [2024-11-20 08:43:59.258420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.345 [2024-11-20 08:43:59.258428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.345 [2024-11-20 08:43:59.258899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.603 [2024-11-20 08:43:59.332692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.603 [2024-11-20 08:43:59.459354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.603 Malloc0 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.603 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.862 [2024-11-20 08:43:59.516826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64702 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64702 /var/tmp/bdevperf.sock 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64702 ']' 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.862 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:28.862 [2024-11-20 08:43:59.579548] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:28.862 [2024-11-20 08:43:59.579663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64702 ] 00:10:28.862 [2024-11-20 08:43:59.733019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.121 [2024-11-20 08:43:59.808263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.121 [2024-11-20 08:43:59.883465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.121 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.121 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:29.121 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:29.121 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.121 08:43:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:29.379 NVMe0n1 00:10:29.379 08:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.379 08:44:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:29.379 Running I/O for 10 seconds... 00:10:31.252 6279.00 IOPS, 24.53 MiB/s [2024-11-20T08:44:03.544Z] 7085.00 IOPS, 27.68 MiB/s [2024-11-20T08:44:04.479Z] 7228.67 IOPS, 28.24 MiB/s [2024-11-20T08:44:05.415Z] 7291.00 IOPS, 28.48 MiB/s [2024-11-20T08:44:06.351Z] 7392.80 IOPS, 28.88 MiB/s [2024-11-20T08:44:07.288Z] 7521.33 IOPS, 29.38 MiB/s [2024-11-20T08:44:08.262Z] 7615.29 IOPS, 29.75 MiB/s [2024-11-20T08:44:09.208Z] 7680.75 IOPS, 30.00 MiB/s [2024-11-20T08:44:10.584Z] 7724.44 IOPS, 30.17 MiB/s [2024-11-20T08:44:10.584Z] 7753.70 IOPS, 30.29 MiB/s 00:10:39.669 Latency(us) 00:10:39.669 [2024-11-20T08:44:10.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.669 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:39.669 Verification LBA range: start 0x0 length 0x4000 00:10:39.669 NVMe0n1 : 10.08 7778.14 30.38 0.00 0.00 130936.86 24546.21 95325.09 00:10:39.669 [2024-11-20T08:44:10.584Z] =================================================================================================================== 00:10:39.669 [2024-11-20T08:44:10.584Z] Total : 7778.14 30.38 0.00 0.00 130936.86 24546.21 95325.09 00:10:39.669 { 00:10:39.669 "results": [ 00:10:39.669 { 00:10:39.669 "job": "NVMe0n1", 00:10:39.669 "core_mask": "0x1", 00:10:39.669 "workload": "verify", 00:10:39.669 "status": "finished", 00:10:39.669 "verify_range": { 00:10:39.669 "start": 0, 00:10:39.669 "length": 16384 00:10:39.669 }, 00:10:39.669 "queue_depth": 1024, 00:10:39.669 "io_size": 4096, 00:10:39.669 "runtime": 10.083131, 00:10:39.669 "iops": 7778.139548122503, 00:10:39.669 "mibps": 30.383357609853526, 00:10:39.669 "io_failed": 0, 00:10:39.669 "io_timeout": 0, 00:10:39.669 "avg_latency_us": 130936.85518845312, 00:10:39.669 "min_latency_us": 24546.21090909091, 00:10:39.669 "max_latency_us": 95325.09090909091 00:10:39.669 
} 00:10:39.669 ], 00:10:39.669 "core_count": 1 00:10:39.669 } 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64702 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64702 ']' 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64702 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64702 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.669 killing process with pid 64702 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64702' 00:10:39.669 Received shutdown signal, test time was about 10.000000 seconds 00:10:39.669 00:10:39.669 Latency(us) 00:10:39.669 [2024-11-20T08:44:10.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.669 [2024-11-20T08:44:10.584Z] =================================================================================================================== 00:10:39.669 [2024-11-20T08:44:10.584Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64702 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64702 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:39.669 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.928 rmmod nvme_tcp 00:10:39.928 rmmod nvme_fabrics 00:10:39.928 rmmod nvme_keyring 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64672 ']' 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64672 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64672 ']' 00:10:39.928 
08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64672 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64672 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:39.928 killing process with pid 64672 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64672' 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64672 00:10:39.928 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64672 00:10:40.187 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.187 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.187 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.187 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:40.187 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.187 08:44:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:40.187 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:40.446 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:40.446 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:40.446 08:44:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.446 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.446 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:40.447 00:10:40.447 real 0m12.896s 00:10:40.447 user 0m21.722s 00:10:40.447 sys 0m2.334s 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.447 ************************************ 00:10:40.447 END TEST nvmf_queue_depth 00:10:40.447 ************************************ 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.447 ************************************ 00:10:40.447 START TEST nvmf_target_multipath 00:10:40.447 ************************************ 00:10:40.447 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:40.447 * Looking for test storage... 
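The nvmf_queue_depth trace above boils down to a short RPC sequence against an already-running bdevperf instance, followed by an ordered teardown. A minimal sketch of those steps follows, using only commands visible in the trace; paths are shown relative to the SPDK checkout and the pid variables are placeholders, not names taken from queue_depth.sh (the log kills pids 64702 and 64672 via killprocess):

    # attach the exported namespace to bdevperf over its dedicated RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # run the configured verify workload (queue depth 1024, 4 KiB I/O, ~10 s) and collect the JSON summary
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    # tear down in order: bdevperf first, then the nvmf target
    kill "$bdevperf_pid" && wait "$bdevperf_pid"
    kill "$nvmfpid" && wait "$nvmfpid"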
00:10:40.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:40.706 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:40.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.707 --rc genhtml_branch_coverage=1 00:10:40.707 --rc genhtml_function_coverage=1 00:10:40.707 --rc genhtml_legend=1 00:10:40.707 --rc geninfo_all_blocks=1 00:10:40.707 --rc geninfo_unexecuted_blocks=1 00:10:40.707 00:10:40.707 ' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:40.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.707 --rc genhtml_branch_coverage=1 00:10:40.707 --rc genhtml_function_coverage=1 00:10:40.707 --rc genhtml_legend=1 00:10:40.707 --rc geninfo_all_blocks=1 00:10:40.707 --rc geninfo_unexecuted_blocks=1 00:10:40.707 00:10:40.707 ' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:40.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.707 --rc genhtml_branch_coverage=1 00:10:40.707 --rc genhtml_function_coverage=1 00:10:40.707 --rc genhtml_legend=1 00:10:40.707 --rc geninfo_all_blocks=1 00:10:40.707 --rc geninfo_unexecuted_blocks=1 00:10:40.707 00:10:40.707 ' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:40.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.707 --rc genhtml_branch_coverage=1 00:10:40.707 --rc genhtml_function_coverage=1 00:10:40.707 --rc genhtml_legend=1 00:10:40.707 --rc geninfo_all_blocks=1 00:10:40.707 --rc geninfo_unexecuted_blocks=1 00:10:40.707 00:10:40.707 ' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.707 
08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.707 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:40.707 08:44:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:40.707 Cannot find device "nvmf_init_br" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:40.707 Cannot find device "nvmf_init_br2" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:40.707 Cannot find device "nvmf_tgt_br" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.707 Cannot find device "nvmf_tgt_br2" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:40.707 Cannot find device "nvmf_init_br" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:40.707 Cannot find device "nvmf_init_br2" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:40.707 Cannot find device "nvmf_tgt_br" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:40.707 Cannot find device "nvmf_tgt_br2" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:40.707 Cannot find device "nvmf_br" 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:40.707 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:40.966 Cannot find device "nvmf_init_if" 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:40.966 Cannot find device "nvmf_init_if2" 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.966 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
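The ip(8) trace above builds the virtual test topology for the multipath run. Condensed into plain commands (only commands that appear in the trace, run as root; some link-up steps are folded together), the setup looks roughly like this:

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per interface; the *_br ends stay in the default namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace the nvmf target will run in
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring the endpoints up (bridge enslaving follows in the next step)
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up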
00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.966 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:41.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.139 ms 00:10:41.225 00:10:41.225 --- 10.0.0.3 ping statistics --- 00:10:41.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.225 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:41.225 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:41.225 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:10:41.225 00:10:41.225 --- 10.0.0.4 ping statistics --- 00:10:41.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.225 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:41.225 00:10:41.225 --- 10.0.0.1 ping statistics --- 00:10:41.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.225 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:41.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:10:41.225 00:10:41.225 --- 10.0.0.2 ping statistics --- 00:10:41.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.225 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:41.225 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65068 00:10:41.226 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.226 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65068 00:10:41.226 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65068 ']' 00:10:41.226 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
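After the addresses are assigned, the trace bridges the host-side veth ends, opens the NVMe/TCP port in iptables, verifies reachability with single pings, and launches nvmf_tgt inside the namespace. A minimal sketch of the same steps, with commands taken from the trace (the iptables comment matching used by the script is omitted here and the loop is only a shorthand for the four enslaving commands):

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br     # all four host-side ends join one bridge
    done
    # allow NVMe/TCP (port 4420) in from the initiator interfaces and traffic across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity-check both directions before starting the target
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    # run the target inside the namespace: shm id 0, all trace groups, core mask 0xF (four cores)
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF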
00:10:41.226 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.226 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.226 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.226 08:44:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:41.226 [2024-11-20 08:44:12.014708] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:41.226 [2024-11-20 08:44:12.014883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.485 [2024-11-20 08:44:12.164935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.485 [2024-11-20 08:44:12.244277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.485 [2024-11-20 08:44:12.244345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.485 [2024-11-20 08:44:12.244358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.485 [2024-11-20 08:44:12.244367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.485 [2024-11-20 08:44:12.244374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.485 [2024-11-20 08:44:12.245728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.485 [2024-11-20 08:44:12.245837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.485 [2024-11-20 08:44:12.245906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.485 [2024-11-20 08:44:12.245906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.485 [2024-11-20 08:44:12.318425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.421 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.421 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:10:42.421 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.421 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.421 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:42.421 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.421 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:42.680 [2024-11-20 08:44:13.393164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.680 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b 
Malloc0 00:10:42.940 Malloc0 00:10:42.940 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:43.199 08:44:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.457 08:44:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:43.717 [2024-11-20 08:44:14.465823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:43.717 08:44:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:43.977 [2024-11-20 08:44:14.718043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:43.977 08:44:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:43.977 08:44:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:44.235 08:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.235 08:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:10:44.235 08:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.235 08:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:44.235 08:44:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 
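With the target listening, the multipath test provisions a single subsystem, exposes it on both target addresses, and connects to it once per address from the initiator, so the same namespace shows up behind two controller paths (nvme0c0n1 / nvme0c1n1 in the trace). A condensed sketch of the RPC and nvme-cli sequence as it appears above; paths are relative to the SPDK checkout, $NVME_HOSTNQN/$NVME_HOSTID come from the test's common.sh, and the -g/-G flags are carried over from the trace unchanged:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # one subsystem, two listeners -> two paths
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
    # connect once per path
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
    # each path's ANA state is then readable from sysfs, e.g.
    cat /sys/block/nvme0c0n1/ana_state    # expected to start as "optimized"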
00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:46.139 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65159 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:46.140 08:44:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:46.399 [global] 00:10:46.399 thread=1 00:10:46.399 invalidate=1 00:10:46.399 rw=randrw 00:10:46.399 time_based=1 00:10:46.399 runtime=6 00:10:46.399 ioengine=libaio 00:10:46.399 direct=1 00:10:46.399 bs=4096 00:10:46.399 iodepth=128 00:10:46.399 norandommap=0 00:10:46.399 numjobs=1 00:10:46.399 00:10:46.399 verify_dump=1 00:10:46.399 verify_backlog=512 00:10:46.399 verify_state_save=0 00:10:46.399 do_verify=1 00:10:46.399 verify=crc32c-intel 00:10:46.399 [job0] 00:10:46.399 filename=/dev/nvme0n1 00:10:46.399 Could not set queue depth (nvme0n1) 00:10:46.399 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.399 fio-3.35 00:10:46.399 Starting 1 thread 00:10:47.336 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:47.594 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:47.852 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:47.852 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:47.852 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:47.852 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:47.852 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:47.853 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:47.853 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:47.853 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:47.853 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:47.853 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:47.853 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:47.853 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:47.853 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:48.112 08:44:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:48.370 08:44:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65159 00:10:52.592 00:10:52.592 job0: (groupid=0, jobs=1): err= 0: pid=65180: Wed Nov 20 08:44:23 2024 00:10:52.592 read: IOPS=9657, BW=37.7MiB/s (39.6MB/s)(226MiB/6003msec) 00:10:52.592 slat (usec): min=4, max=8998, avg=60.69, stdev=252.55 00:10:52.592 clat (usec): min=1435, max=24951, avg=8979.23, stdev=1914.54 00:10:52.592 lat (usec): min=1904, max=24992, avg=9039.91, stdev=1921.96 00:10:52.592 clat percentiles (usec): 00:10:52.592 | 1.00th=[ 4621], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 7898], 00:10:52.592 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:10:52.592 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[11469], 95.00th=[12780], 00:10:52.592 | 99.00th=[15401], 99.50th=[17171], 99.90th=[23200], 99.95th=[24249], 00:10:52.592 | 99.99th=[24511] 00:10:52.592 bw ( KiB/s): min= 8992, max=24888, per=51.85%, avg=20032.00, stdev=4932.16, samples=11 00:10:52.592 iops : min= 2248, max= 6222, avg=5008.00, stdev=1233.04, samples=11 00:10:52.592 write: IOPS=5613, BW=21.9MiB/s (23.0MB/s)(120MiB/5468msec); 0 zone resets 00:10:52.592 slat (usec): min=13, max=4093, avg=70.55, stdev=179.05 00:10:52.592 clat (usec): min=1422, max=24005, avg=7850.03, stdev=1621.27 00:10:52.592 lat (usec): min=1447, max=24041, avg=7920.58, stdev=1627.86 00:10:52.592 clat percentiles (usec): 00:10:52.592 | 1.00th=[ 3490], 5.00th=[ 4621], 10.00th=[ 5997], 20.00th=[ 7046], 00:10:52.592 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8094], 00:10:52.592 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10552], 00:10:52.592 | 99.00th=[12387], 99.50th=[13304], 99.90th=[16450], 99.95th=[17957], 00:10:52.592 | 99.99th=[20317] 00:10:52.592 bw ( KiB/s): min= 9136, max=24576, per=89.47%, avg=20087.27, stdev=4978.81, samples=11 00:10:52.592 iops : min= 2284, max= 6144, avg=5021.82, stdev=1244.70, samples=11 00:10:52.592 lat (msec) : 2=0.02%, 4=1.11%, 10=84.96%, 20=13.77%, 50=0.14% 00:10:52.592 cpu : usr=5.16%, sys=20.51%, ctx=5130, majf=0, minf=78 00:10:52.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:52.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.592 issued rwts: total=57973,30692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.592 00:10:52.592 Run status group 0 (all jobs): 00:10:52.592 READ: bw=37.7MiB/s (39.6MB/s), 37.7MiB/s-37.7MiB/s (39.6MB/s-39.6MB/s), io=226MiB (237MB), run=6003-6003msec 00:10:52.592 WRITE: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=120MiB (126MB), run=5468-5468msec 00:10:52.592 00:10:52.592 Disk stats (read/write): 00:10:52.592 nvme0n1: ios=57032/30167, merge=0/0, ticks=491895/223156, in_queue=715051, util=98.60% 00:10:52.592 08:44:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:52.851 08:44:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65267 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:53.110 08:44:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:53.110 [global] 00:10:53.110 thread=1 00:10:53.110 invalidate=1 00:10:53.110 rw=randrw 00:10:53.110 time_based=1 00:10:53.110 runtime=6 00:10:53.110 ioengine=libaio 00:10:53.110 direct=1 00:10:53.110 bs=4096 00:10:53.110 iodepth=128 00:10:53.110 norandommap=0 00:10:53.110 numjobs=1 00:10:53.110 00:10:53.368 verify_dump=1 00:10:53.368 verify_backlog=512 00:10:53.368 verify_state_save=0 00:10:53.368 do_verify=1 00:10:53.368 verify=crc32c-intel 00:10:53.368 [job0] 00:10:53.368 filename=/dev/nvme0n1 00:10:53.368 Could not set queue depth (nvme0n1) 00:10:53.368 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.368 fio-3.35 00:10:53.368 Starting 1 thread 00:10:54.304 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:54.563 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:54.822 
08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:54.822 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:54.822 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:54.822 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:54.822 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:54.822 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:54.823 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:54.823 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:54.823 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:54.823 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:54.823 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:54.823 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:54.823 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:55.082 08:44:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:55.340 08:44:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65267 00:10:59.540 00:10:59.540 job0: (groupid=0, jobs=1): err= 0: pid=65288: Wed Nov 20 08:44:30 2024 00:10:59.540 read: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(267MiB/6007msec) 00:10:59.540 slat (usec): min=4, max=6747, avg=43.56, stdev=196.38 00:10:59.540 clat (usec): min=423, max=17672, avg=7679.66, stdev=1992.20 00:10:59.540 lat (usec): min=431, max=17684, avg=7723.22, stdev=2007.24 00:10:59.540 clat percentiles (usec): 00:10:59.540 | 1.00th=[ 3032], 5.00th=[ 4080], 10.00th=[ 4948], 20.00th=[ 5997], 00:10:59.540 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8225], 00:10:59.540 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11469], 00:10:59.540 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14615], 99.95th=[15664], 00:10:59.540 | 99.99th=[17433] 00:10:59.540 bw ( KiB/s): min=15584, max=36624, per=53.96%, avg=24570.00, stdev=7279.33, samples=11 00:10:59.540 iops : min= 3896, max= 9156, avg=6142.45, stdev=1819.75, samples=11 00:10:59.540 write: IOPS=6724, BW=26.3MiB/s (27.5MB/s)(143MiB/5442msec); 0 zone resets 00:10:59.540 slat (usec): min=12, max=1542, avg=54.88, stdev=136.76 00:10:59.540 clat (usec): min=572, max=15826, avg=6489.79, stdev=1882.95 00:10:59.540 lat (usec): min=600, max=15874, avg=6544.67, stdev=1896.81 00:10:59.540 clat percentiles (usec): 00:10:59.540 | 1.00th=[ 2212], 5.00th=[ 3294], 10.00th=[ 3818], 20.00th=[ 4490], 00:10:59.540 | 30.00th=[ 5211], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7439], 00:10:59.540 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:10:59.540 | 99.00th=[11338], 99.50th=[12125], 99.90th=[13698], 99.95th=[14877], 00:10:59.540 | 99.99th=[15533] 00:10:59.540 bw ( KiB/s): min=16232, max=37516, per=91.24%, avg=24544.36, stdev=7123.22, samples=11 00:10:59.540 iops : min= 4058, max= 9379, avg=6136.09, stdev=1780.80, samples=11 00:10:59.540 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.05% 00:10:59.540 lat (msec) : 2=0.38%, 4=6.88%, 10=88.01%, 20=4.66% 00:10:59.540 cpu : usr=6.19%, sys=23.03%, ctx=6377, majf=0, minf=78 00:10:59.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:59.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.540 issued rwts: total=68381,36597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.540 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:59.540 00:10:59.540 Run status group 0 (all jobs): 00:10:59.540 READ: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=267MiB (280MB), run=6007-6007msec 00:10:59.540 WRITE: bw=26.3MiB/s (27.5MB/s), 26.3MiB/s-26.3MiB/s (27.5MB/s-27.5MB/s), io=143MiB (150MB), run=5442-5442msec 00:10:59.540 00:10:59.540 Disk stats (read/write): 00:10:59.540 nvme0n1: ios=67657/35822, merge=0/0, ticks=492968/213878, in_queue=706846, util=98.58% 00:10:59.540 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:59.800 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.800 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:59.800 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:59.800 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.800 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.800 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:59.800 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:59.800 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.154 rmmod nvme_tcp 00:11:00.154 rmmod nvme_fabrics 00:11:00.154 rmmod nvme_keyring 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
65068 ']' 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65068 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65068 ']' 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65068 00:11:00.154 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:00.155 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.155 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65068 00:11:00.155 killing process with pid 65068 00:11:00.155 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.155 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.155 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65068' 00:11:00.155 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65068 00:11:00.155 08:44:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65068 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:00.414 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:00.673 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.673 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:00.674 08:44:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:00.674 00:11:00.674 real 0m20.265s 00:11:00.674 user 1m15.458s 00:11:00.674 sys 0m9.494s 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:00.674 ************************************ 00:11:00.674 END TEST nvmf_target_multipath 00:11:00.674 ************************************ 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.674 08:44:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.934 ************************************ 00:11:00.934 START TEST nvmf_zcopy 00:11:00.934 ************************************ 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:00.934 * Looking for test storage... 
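Annotation: the multipath test that just ended keys off the repeated multipath.sh@18-25 trace lines, which poll sysfs until each controller path reports the ANA state pushed via nvmf_subsystem_listener_set_ana_state. A minimal sketch of that helper, reconstructed from the trace (the variable names, the 20-second budget and the sysfs path come from the log; the sleep/retry loop body is an assumption):

    check_ana_state() {
        local path=$1 ana_state=$2      # e.g. check_ana_state nvme0c0n1 optimized
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Wait until the sysfs node exists and reports the expected ANA state.
        while [[ ! -e $ana_state_f || $(< "$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }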
00:11:00.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.934 --rc genhtml_branch_coverage=1 00:11:00.934 --rc genhtml_function_coverage=1 00:11:00.934 --rc genhtml_legend=1 00:11:00.934 --rc geninfo_all_blocks=1 00:11:00.934 --rc geninfo_unexecuted_blocks=1 00:11:00.934 00:11:00.934 ' 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.934 --rc genhtml_branch_coverage=1 00:11:00.934 --rc genhtml_function_coverage=1 00:11:00.934 --rc genhtml_legend=1 00:11:00.934 --rc geninfo_all_blocks=1 00:11:00.934 --rc geninfo_unexecuted_blocks=1 00:11:00.934 00:11:00.934 ' 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.934 --rc genhtml_branch_coverage=1 00:11:00.934 --rc genhtml_function_coverage=1 00:11:00.934 --rc genhtml_legend=1 00:11:00.934 --rc geninfo_all_blocks=1 00:11:00.934 --rc geninfo_unexecuted_blocks=1 00:11:00.934 00:11:00.934 ' 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.934 --rc genhtml_branch_coverage=1 00:11:00.934 --rc genhtml_function_coverage=1 00:11:00.934 --rc genhtml_legend=1 00:11:00.934 --rc geninfo_all_blocks=1 00:11:00.934 --rc geninfo_unexecuted_blocks=1 00:11:00.934 00:11:00.934 ' 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
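Annotation: the scripts/common.sh trace above (lt 1.15 2 expanding into cmp_versions) is the guard that decides whether the installed lcov is new enough to accept the branch/function-coverage flags exported right afterwards. A simplified paraphrase of that comparison, assuming purely numeric fields (the real helper also validates each field with its decimal() check):

    cmp_versions() {            # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:           # split versions on '.', '-' and ':'
        local -a ver1=($1) ver2=($3)
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < n; i++)); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}    # missing fields count as 0
            (( a == b )) && continue
            [[ $2 == '<' ]] && (( a < b )) && return 0
            [[ $2 == '>' ]] && (( a > b )) && return 0
            return 1
        done
        return 1                 # equal versions satisfy neither '<' nor '>'
    }
    lt() { cmp_versions "$1" '<' "$2"; }     # lt 1.15 2 -> true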
00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.934 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.935 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
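Annotation: nvmftestinit below first tears down any stale network fixture (the "Cannot find device" messages are the expected result of that cleanup) and then rebuilds it with nvmf_veth_init. Condensed from the trace that follows, with the interface names and addresses exactly as they appear in the log (link-up steps and the iptables ACCEPT rules are elided here):

    # Target side lives in the nvmf_tgt_ns_spdk namespace, initiator side in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target,    10.0.0.3
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # The second pair (nvmf_init_if2 at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.4) follows the same pattern.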
00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.935 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:01.194 Cannot find device "nvmf_init_br" 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:01.194 08:44:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:01.194 Cannot find device "nvmf_init_br2" 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:01.194 Cannot find device "nvmf_tgt_br" 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.194 Cannot find device "nvmf_tgt_br2" 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:01.194 Cannot find device "nvmf_init_br" 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:01.194 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:01.194 Cannot find device "nvmf_init_br2" 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:01.195 Cannot find device "nvmf_tgt_br" 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:01.195 Cannot find device "nvmf_tgt_br2" 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:01.195 Cannot find device "nvmf_br" 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:01.195 Cannot find device "nvmf_init_if" 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:01.195 Cannot find device "nvmf_init_if2" 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:01.195 08:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:01.195 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:01.195 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:01.195 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:01.195 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:01.195 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:01.454 08:44:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:01.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:11:01.454 00:11:01.454 --- 10.0.0.3 ping statistics --- 00:11:01.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.454 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:01.454 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:01.454 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:01.454 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:11:01.454 00:11:01.454 --- 10.0.0.4 ping statistics --- 00:11:01.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.455 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:01.455 00:11:01.455 --- 10.0.0.1 ping statistics --- 00:11:01.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.455 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:01.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:01.455 00:11:01.455 --- 10.0.0.2 ping statistics --- 00:11:01.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.455 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65593 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65593 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65593 ']' 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.455 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.714 [2024-11-20 08:44:32.374778] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:01.714 [2024-11-20 08:44:32.374900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.714 [2024-11-20 08:44:32.529335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.714 [2024-11-20 08:44:32.614273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.714 [2024-11-20 08:44:32.614558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.714 [2024-11-20 08:44:32.614675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.714 [2024-11-20 08:44:32.614774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.714 [2024-11-20 08:44:32.614883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.714 [2024-11-20 08:44:32.615479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.973 [2024-11-20 08:44:32.690351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.973 [2024-11-20 08:44:32.821337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.973 [2024-11-20 08:44:32.837490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.973 malloc0 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:01.973 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:01.973 { 00:11:01.973 "params": { 00:11:01.973 "name": "Nvme$subsystem", 00:11:01.973 "trtype": "$TEST_TRANSPORT", 00:11:01.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.973 "adrfam": "ipv4", 00:11:01.973 "trsvcid": "$NVMF_PORT", 00:11:01.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.973 "hdgst": ${hdgst:-false}, 00:11:01.973 "ddgst": ${ddgst:-false} 00:11:01.973 }, 00:11:01.973 "method": "bdev_nvme_attach_controller" 00:11:01.973 } 00:11:01.973 EOF 00:11:01.973 )") 00:11:02.232 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:02.233 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
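Annotation: the config assembled above by gen_nvmf_target_json is handed to bdevperf as /dev/fd/62, i.e. through process substitution; the rendered JSON it produces is printed immediately below. The invocation pattern, paraphrased from the trace (flag values are the ones shown; the exact fd number depends on the shell):

    # target/zcopy.sh@33: 10-second verify workload, queue depth 128, 8 KiB I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -t 10 -q 128 -w verify -o 8192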
00:11:02.233 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:02.233 08:44:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:02.233 "params": { 00:11:02.233 "name": "Nvme1", 00:11:02.233 "trtype": "tcp", 00:11:02.233 "traddr": "10.0.0.3", 00:11:02.233 "adrfam": "ipv4", 00:11:02.233 "trsvcid": "4420", 00:11:02.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:02.233 "hdgst": false, 00:11:02.233 "ddgst": false 00:11:02.233 }, 00:11:02.233 "method": "bdev_nvme_attach_controller" 00:11:02.233 }' 00:11:02.233 [2024-11-20 08:44:32.941086] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:02.233 [2024-11-20 08:44:32.941219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65624 ] 00:11:02.233 [2024-11-20 08:44:33.094277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.492 [2024-11-20 08:44:33.181217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.492 [2024-11-20 08:44:33.266027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.492 Running I/O for 10 seconds... 00:11:04.805 5761.00 IOPS, 45.01 MiB/s [2024-11-20T08:44:36.728Z] 5857.50 IOPS, 45.76 MiB/s [2024-11-20T08:44:37.666Z] 5880.67 IOPS, 45.94 MiB/s [2024-11-20T08:44:38.622Z] 5864.75 IOPS, 45.82 MiB/s [2024-11-20T08:44:39.558Z] 5896.60 IOPS, 46.07 MiB/s [2024-11-20T08:44:40.494Z] 5881.50 IOPS, 45.95 MiB/s [2024-11-20T08:44:41.431Z] 5889.14 IOPS, 46.01 MiB/s [2024-11-20T08:44:42.809Z] 5879.00 IOPS, 45.93 MiB/s [2024-11-20T08:44:43.750Z] 5870.56 IOPS, 45.86 MiB/s [2024-11-20T08:44:43.750Z] 5864.60 IOPS, 45.82 MiB/s 00:11:12.835 Latency(us) 00:11:12.835 [2024-11-20T08:44:43.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.835 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:12.835 Verification LBA range: start 0x0 length 0x1000 00:11:12.835 Nvme1n1 : 10.02 5866.89 45.84 0.00 0.00 21747.29 3321.48 33125.47 00:11:12.835 [2024-11-20T08:44:43.750Z] =================================================================================================================== 00:11:12.835 [2024-11-20T08:44:43.750Z] Total : 5866.89 45.84 0.00 0.00 21747.29 3321.48 33125.47 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65741 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:12.835 { 00:11:12.835 "params": { 00:11:12.835 "name": "Nvme$subsystem", 00:11:12.835 "trtype": "$TEST_TRANSPORT", 00:11:12.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:12.835 "adrfam": "ipv4", 00:11:12.835 "trsvcid": "$NVMF_PORT", 00:11:12.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:12.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:12.835 "hdgst": ${hdgst:-false}, 00:11:12.835 "ddgst": ${ddgst:-false} 00:11:12.835 }, 00:11:12.835 "method": "bdev_nvme_attach_controller" 00:11:12.835 } 00:11:12.835 EOF 00:11:12.835 )") 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:12.835 [2024-11-20 08:44:43.712362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.835 [2024-11-20 08:44:43.712411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:12.835 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:12.835 "params": { 00:11:12.835 "name": "Nvme1", 00:11:12.835 "trtype": "tcp", 00:11:12.835 "traddr": "10.0.0.3", 00:11:12.835 "adrfam": "ipv4", 00:11:12.835 "trsvcid": "4420", 00:11:12.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:12.835 "hdgst": false, 00:11:12.835 "ddgst": false 00:11:12.835 }, 00:11:12.835 "method": "bdev_nvme_attach_controller" 00:11:12.835 }' 00:11:12.835 [2024-11-20 08:44:43.724302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.835 [2024-11-20 08:44:43.724332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.835 [2024-11-20 08:44:43.736329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.835 [2024-11-20 08:44:43.736373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.752355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.752406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.757132] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
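The xtrace entries above (nvmf/common.sh@560 through @586) show how the bdevperf config is put together: one JSON fragment per subsystem is produced from a heredoc into a bash array, the array is joined with IFS=',' and the result is normalized with jq. Below is a minimal stand-alone sketch of that pattern, with the values hard-coded for the single-subsystem case seen in this run; the real helper substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT and so on, and splices the joined fragments into a larger bdev-subsystem document.

    config=()
    for subsystem in 1; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas and pretty-print, mirroring common.sh@584-586.
    # With a single fragment the output is valid JSON on its own; with several, the
    # real helper embeds the joined string inside a larger "config" array first.
    IFS=,
    printf '%s\n' "${config[*]}" | jq .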
00:11:13.094 [2024-11-20 08:44:43.757246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65741 ] 00:11:13.094 [2024-11-20 08:44:43.764327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.764364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.776330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.776370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.788332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.788372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.800301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.800327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.812303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.812332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.824332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.824368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.836332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.836366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.848363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.848402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.860340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.860375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.872335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.872368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.884338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.884387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.896323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.896365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.901935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.094 [2024-11-20 08:44:43.908373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.908437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.920432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.920478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.932427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.932470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.944417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.944458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.956386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.956417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.968352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.968409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.980396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.980426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:43.987668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.094 [2024-11-20 08:44:43.992418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.094 [2024-11-20 08:44:43.992449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.094 [2024-11-20 08:44:44.004407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.095 [2024-11-20 08:44:44.004457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.016435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.353 [2024-11-20 08:44:44.016479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.028415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.353 [2024-11-20 08:44:44.028457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.040408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.353 [2024-11-20 08:44:44.040459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.052405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.353 [2024-11-20 08:44:44.052442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.064425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.353 [2024-11-20 08:44:44.064458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.072541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:13.353 [2024-11-20 08:44:44.076453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:13.353 [2024-11-20 08:44:44.076494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.088434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.353 [2024-11-20 08:44:44.088468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.100471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.353 [2024-11-20 08:44:44.100504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.353 [2024-11-20 08:44:44.112438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.353 [2024-11-20 08:44:44.112465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.124447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.124469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.136468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.136500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.148474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.148501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.160462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.160490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.172491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.172519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.184505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.184535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.196529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.196563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.208536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.208562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 Running I/O for 5 seconds... 
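For the 5-second run that starts here, zcopy.sh@37 (traced above) hands the generated config to bdevperf as /dev/fd/63, i.e. through a process substitution, so the JSON never touches disk. A sketch of that invocation follows, with the binary path and options copied from the trace and gen_nvmf_target_json standing in for the helper that the previous sketch approximates; it assumes an SPDK build at that path.

    # Sketch only: bash expands <(...) to a /dev/fd/NN path, matching the
    # "--json /dev/fd/63" seen in the trace. Per the trace, the options request a
    # 5 s run, queue depth 128, 50/50 random read/write, 8192-byte I/Os.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192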
00:11:13.354 [2024-11-20 08:44:44.223058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.223091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.237859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.237918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.354 [2024-11-20 08:44:44.253801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.354 [2024-11-20 08:44:44.253886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.272098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.272135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.286714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.286749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.302563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.302618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.321008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.321077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.336107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.336174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.352024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.352110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.368512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.368591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.386682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.386762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.402635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.402712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.419493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.419543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.436078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.436151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.452653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 
[2024-11-20 08:44:44.452711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.469131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.469228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.486193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.486254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.503326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.503392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.613 [2024-11-20 08:44:44.520146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.613 [2024-11-20 08:44:44.520204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.536633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.536717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.554065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.554125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.570194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.570255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.587242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.587299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.603290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.603339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.621014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.621072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.636628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.636708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.646494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.646548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.661602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.661658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.676878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.676933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.687431] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.687485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.702313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.702364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.719866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.719907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.734964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.735034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.745176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.745223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.760098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.760148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.871 [2024-11-20 08:44:44.770545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.871 [2024-11-20 08:44:44.770589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.785630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.785716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.803926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.803967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.818655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.818708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.833824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.833890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.851791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.851871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.866847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.866896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.877551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.877589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.892668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.892706] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.910014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.910070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.926605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.926689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.944378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.944447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.960876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.960929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.977281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.977317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:44.993808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:44.993878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:45.010294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:45.010361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.130 [2024-11-20 08:44:45.028179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.130 [2024-11-20 08:44:45.028243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.043598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.043665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.054214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.054249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.069758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.069792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.084898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.084946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.102761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.102842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.118168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.118249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.135580] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.135651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.152487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.152526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.168701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.168759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.185132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.185204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.203490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.203552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 11237.00 IOPS, 87.79 MiB/s [2024-11-20T08:44:45.304Z] [2024-11-20 08:44:45.218512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.218548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.234688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.234746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.251405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.251475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.267348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.267404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.276837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.276884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.389 [2024-11-20 08:44:45.292190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.389 [2024-11-20 08:44:45.292236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.308548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.308582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.318159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.318206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.334351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.334387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.351524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
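The throughput samples interleaved with the namespace errors (11237.00 IOPS, 87.79 MiB/s just above, and the 5866.89 IOPS / 45.84 MiB/s summary of the earlier 10-second verify run) are internally consistent with the 8192-byte I/O size passed to bdevperf: MiB/s is simply IOPS times the I/O size. A quick, purely illustrative check:

    awk 'BEGIN {
        # 8192-byte I/Os: MiB/s = IOPS * 8192 / 2^20
        printf "%.2f MiB/s\n", 11237.00 * 8192 / 1048576   # -> 87.79, as sampled above
        printf "%.2f MiB/s\n", 5866.89  * 8192 / 1048576   # -> 45.84, the 10 s summary
    }'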
00:11:14.648 [2024-11-20 08:44:45.351565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.367395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.367452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.377096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.377129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.393147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.393181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.411273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.411308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.426107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.426155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.444151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.444187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.459502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.459558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.477847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.477880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.493207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.493255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.511381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.511415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.526336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.526372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.535964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.535999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.648 [2024-11-20 08:44:45.552216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.648 [2024-11-20 08:44:45.552287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.567794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.567855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.577008] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.577040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.594166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.594214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.610551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.610584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.627333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.627405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.643885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.643942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.660353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.660390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.677942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.677978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.693172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.693225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.703082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.703132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.718047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.718100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.733589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.733658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.751344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.751398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.765787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.765848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.781887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.781943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.798297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.798345] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:14.907 [2024-11-20 08:44:45.814892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:14.907 [2024-11-20 08:44:45.814926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.831067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.831104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.848823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.848886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.869776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.869868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.885309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.885369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.902472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.902548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.918571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.918633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.936735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.936794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.951804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.951910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.967808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.967923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:45.984102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:45.984168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:46.002688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:46.002738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:46.018124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:46.018195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:46.035068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:46.035119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:46.051380] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:46.051437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.166 [2024-11-20 08:44:46.068919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.166 [2024-11-20 08:44:46.068957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.086541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.086576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.101584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.101619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.119216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.119248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.134509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.134557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.144567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.144608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.159719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.159767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.176887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.176942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.193745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.193852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.209641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.209691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 11345.00 IOPS, 88.63 MiB/s [2024-11-20T08:44:46.340Z] [2024-11-20 08:44:46.218999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.219045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.235309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.235375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.251445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.251505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.267005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:15.425 [2024-11-20 08:44:46.267061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.282387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.282433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.299666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.425 [2024-11-20 08:44:46.299725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.425 [2024-11-20 08:44:46.316268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.426 [2024-11-20 08:44:46.316318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.426 [2024-11-20 08:44:46.332671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.426 [2024-11-20 08:44:46.332732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.350083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.350167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.366292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.366361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.383248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.383303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.400037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.400113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.416373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.416427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.432967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.433021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.452104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.452142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.467638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.467672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.483747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.483781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.502898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.502949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.517963] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.684 [2024-11-20 08:44:46.518024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.684 [2024-11-20 08:44:46.534185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.685 [2024-11-20 08:44:46.534244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.685 [2024-11-20 08:44:46.550831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.685 [2024-11-20 08:44:46.550908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.685 [2024-11-20 08:44:46.567382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.685 [2024-11-20 08:44:46.567443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.685 [2024-11-20 08:44:46.583643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.685 [2024-11-20 08:44:46.583736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.600350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.600407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.617435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.617494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.633906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.633947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.650873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.650930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.667878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.667945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.684295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.684331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.700936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.700974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.717635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.717672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.734295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.734330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.944 [2024-11-20 08:44:46.752357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.944 [2024-11-20 08:44:46.752405] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:15.944 [2024-11-20 08:44:46.767370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:15.944 [2024-11-20 08:44:46.767404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair (subsystem.c:2123 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats roughly every 10-20 ms from 08:44:46.776 through 08:44:49.018 while the I/O job keeps running; the individual timestamped repeats are elided here and only the periodic throughput readings are kept ...]
00:11:16.465 11363.33 IOPS, 88.78 MiB/s [2024-11-20T08:44:47.380Z]
00:11:17.502 11386.25 IOPS, 88.96 MiB/s [2024-11-20T08:44:48.417Z]
[... the duplicate-NSID error pair continues at the same cadence from 08:44:49.035 through 08:44:49.219 (timestamps elided) ...]
00:11:18.539 11416.80 IOPS, 89.19 MiB/s [2024-11-20T08:44:49.454Z]
00:11:18.539 Latency(us)
00:11:18.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:18.539 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:18.539 Nvme1n1 : 5.01 11415.55 89.18 0.00 0.00 11196.53 4676.89 20018.27
00:11:18.539 ===================================================================================================================
00:11:18.539 Total : 11415.55 89.18 0.00 0.00 11196.53 4676.89 20018.27
[... the "Requested NSID 1 already in use" / "Unable to add namespace" pair keeps appearing while the test shuts down, from 08:44:49.231 through 08:44:49.375 (timestamps elided) ...]
[2024-11-20 08:44:49.387126]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.540 [2024-11-20 08:44:49.387179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.540 [2024-11-20 08:44:49.399130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.540 [2024-11-20 08:44:49.399169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.540 [2024-11-20 08:44:49.411121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.540 [2024-11-20 08:44:49.411166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.540 [2024-11-20 08:44:49.423113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.540 [2024-11-20 08:44:49.423145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.540 [2024-11-20 08:44:49.435123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.540 [2024-11-20 08:44:49.435156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.540 [2024-11-20 08:44:49.447165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.540 [2024-11-20 08:44:49.447212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.799 [2024-11-20 08:44:49.459158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.799 [2024-11-20 08:44:49.459204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.799 [2024-11-20 08:44:49.471143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.799 [2024-11-20 08:44:49.471180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.799 [2024-11-20 08:44:49.483134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.799 [2024-11-20 08:44:49.483168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.799 [2024-11-20 08:44:49.495141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.799 [2024-11-20 08:44:49.495173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.799 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65741) - No such process 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65741 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.799 delay0 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.799 08:44:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:18.799 [2024-11-20 08:44:49.711664] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:25.381 Initializing NVMe Controllers 00:11:25.381 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.381 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:25.381 Initialization complete. Launching workers. 00:11:25.381 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 846 00:11:25.382 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1133, failed to submit 33 00:11:25.382 success 1018, unsuccessful 115, failed 0 00:11:25.382 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:25.382 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:25.382 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.382 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:25.382 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.382 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:25.382 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.382 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.382 rmmod nvme_tcp 00:11:25.382 rmmod nvme_fabrics 00:11:25.382 rmmod nvme_keyring 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65593 ']' 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65593 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65593 ']' 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65593 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65593 00:11:25.382 08:44:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:25.382 killing process with pid 65593 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65593' 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65593 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65593 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:25.382 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.666 08:44:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:25.666 00:11:25.666 real 0m24.899s 00:11:25.666 user 0m40.557s 00:11:25.666 sys 0m7.088s 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.666 ************************************ 00:11:25.666 END TEST nvmf_zcopy 00:11:25.666 ************************************ 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.666 08:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.667 08:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:25.667 ************************************ 00:11:25.667 START TEST nvmf_nmic 00:11:25.667 ************************************ 00:11:25.667 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:25.927 * Looking for test storage... 00:11:25.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.927 --rc genhtml_branch_coverage=1 00:11:25.927 --rc genhtml_function_coverage=1 00:11:25.927 --rc genhtml_legend=1 00:11:25.927 --rc geninfo_all_blocks=1 00:11:25.927 --rc geninfo_unexecuted_blocks=1 00:11:25.927 00:11:25.927 ' 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.927 --rc genhtml_branch_coverage=1 00:11:25.927 --rc genhtml_function_coverage=1 00:11:25.927 --rc genhtml_legend=1 00:11:25.927 --rc geninfo_all_blocks=1 00:11:25.927 --rc geninfo_unexecuted_blocks=1 00:11:25.927 00:11:25.927 ' 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.927 --rc genhtml_branch_coverage=1 00:11:25.927 --rc genhtml_function_coverage=1 00:11:25.927 --rc genhtml_legend=1 00:11:25.927 --rc geninfo_all_blocks=1 00:11:25.927 --rc geninfo_unexecuted_blocks=1 00:11:25.927 00:11:25.927 ' 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.927 --rc genhtml_branch_coverage=1 00:11:25.927 --rc genhtml_function_coverage=1 00:11:25.927 --rc genhtml_legend=1 00:11:25.927 --rc geninfo_all_blocks=1 00:11:25.927 --rc geninfo_unexecuted_blocks=1 00:11:25.927 00:11:25.927 ' 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.927 08:44:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.927 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.928 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:25.928 08:44:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:25.928 Cannot 
find device "nvmf_init_br" 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:25.928 Cannot find device "nvmf_init_br2" 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:25.928 Cannot find device "nvmf_tgt_br" 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:25.928 Cannot find device "nvmf_tgt_br2" 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:25.928 Cannot find device "nvmf_init_br" 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:25.928 Cannot find device "nvmf_init_br2" 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:25.928 Cannot find device "nvmf_tgt_br" 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:25.928 Cannot find device "nvmf_tgt_br2" 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:25.928 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:25.928 Cannot find device "nvmf_br" 00:11:26.187 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:26.187 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:26.187 Cannot find device "nvmf_init_if" 00:11:26.187 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:26.187 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:26.187 Cannot find device "nvmf_init_if2" 00:11:26.187 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:26.188 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:26.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:26.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:11:26.188 00:11:26.188 --- 10.0.0.3 ping statistics --- 00:11:26.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.188 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:26.188 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:26.188 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:11:26.188 00:11:26.188 --- 10.0.0.4 ping statistics --- 00:11:26.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.188 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:26.188 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:26.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:26.447 00:11:26.447 --- 10.0.0.1 ping statistics --- 00:11:26.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.447 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:26.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:26.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:26.447 00:11:26.447 --- 10.0.0.2 ping statistics --- 00:11:26.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.447 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:26.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66118 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66118 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66118 ']' 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.447 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:26.447 [2024-11-20 08:44:57.186236] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
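With the four 10.0.0.x addresses answering pings across the bridge, nvmfappstart launches the target application inside the namespace and waits for its RPC socket. Roughly, with the arguments taken from this run (waitforlisten's real implementation is in autotest_common.sh):

    # sketch of the traced nvmfappstart -m 0xF
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # polls until /var/tmp/spdk.sock accepts RPCs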
00:11:26.447 [2024-11-20 08:44:57.186347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.447 [2024-11-20 08:44:57.339591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.706 [2024-11-20 08:44:57.425818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.706 [2024-11-20 08:44:57.425885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.706 [2024-11-20 08:44:57.425897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.706 [2024-11-20 08:44:57.425906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.706 [2024-11-20 08:44:57.425913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.706 [2024-11-20 08:44:57.427247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.706 [2024-11-20 08:44:57.427287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.706 [2024-11-20 08:44:57.427389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.706 [2024-11-20 08:44:57.427393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.706 [2024-11-20 08:44:57.498353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.649 [2024-11-20 08:44:58.300058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.649 Malloc0 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.649 08:44:58 
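Once the reactors are up, the test drives the target entirely through JSON-RPC: rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Written out by hand, the calls traced so far would look roughly like this (parameters copied from the trace):

    # sketch: the same control-plane sequence via rpc.py
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME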
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.649 [2024-11-20 08:44:58.375845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:27.649 test case1: single bdev can't be used in multiple subsystems 00:11:27.649 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.650 [2024-11-20 08:44:58.399659] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:27.650 [2024-11-20 08:44:58.399723] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:27.650 [2024-11-20 08:44:58.399736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:27.650 request: 00:11:27.650 { 00:11:27.650 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:27.650 "namespace": { 00:11:27.650 "bdev_name": "Malloc0", 00:11:27.650 "no_auto_visible": false 00:11:27.650 }, 00:11:27.650 "method": "nvmf_subsystem_add_ns", 00:11:27.650 "req_id": 1 00:11:27.650 } 00:11:27.650 Got JSON-RPC error response 00:11:27.650 response: 00:11:27.650 { 00:11:27.650 "code": -32602, 00:11:27.650 "message": "Invalid parameters" 00:11:27.650 } 00:11:27.650 Adding namespace failed - expected result. 00:11:27.650 test case2: host connect to nvmf target in multiple paths 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:27.650 [2024-11-20 08:44:58.411865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:27.650 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:27.909 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.909 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.909 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.909 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.909 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.812 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.812 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.812 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.812 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.812 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.812 08:45:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:29.812 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:29.812 [global] 00:11:29.812 thread=1 00:11:29.812 invalidate=1 00:11:29.812 rw=write 00:11:29.812 time_based=1 00:11:29.812 runtime=1 00:11:29.812 ioengine=libaio 00:11:29.812 direct=1 00:11:29.812 bs=4096 00:11:29.812 iodepth=1 00:11:29.812 norandommap=0 00:11:29.812 numjobs=1 00:11:29.812 00:11:29.812 verify_dump=1 00:11:29.812 verify_backlog=512 00:11:29.812 verify_state_save=0 00:11:29.812 do_verify=1 00:11:29.812 verify=crc32c-intel 00:11:29.812 [job0] 00:11:29.812 filename=/dev/nvme0n1 00:11:30.071 Could not set queue depth (nvme0n1) 00:11:30.071 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:30.071 fio-3.35 00:11:30.071 Starting 1 thread 00:11:31.449 00:11:31.449 job0: (groupid=0, jobs=1): err= 0: pid=66210: Wed Nov 20 08:45:02 2024 00:11:31.449 read: IOPS=2565, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:31.449 slat (nsec): min=11312, max=46305, avg=13741.36, stdev=3011.46 00:11:31.449 clat (usec): min=139, max=2062, avg=198.80, stdev=43.47 00:11:31.449 lat (usec): min=154, max=2074, avg=212.54, stdev=43.56 00:11:31.449 clat percentiles (usec): 00:11:31.449 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 180], 00:11:31.449 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:11:31.449 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 235], 00:11:31.449 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 420], 99.95th=[ 433], 00:11:31.449 | 99.99th=[ 2057] 00:11:31.449 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:31.449 slat (usec): min=16, max=115, avg=21.51, stdev= 5.70 00:11:31.449 clat (usec): min=84, max=504, avg=123.57, stdev=23.29 00:11:31.449 lat (usec): min=104, max=525, avg=145.08, stdev=24.51 00:11:31.449 clat percentiles (usec): 00:11:31.449 | 1.00th=[ 94], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 109], 00:11:31.449 | 30.00th=[ 114], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 126], 00:11:31.449 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 151], 00:11:31.449 | 99.00th=[ 169], 99.50th=[ 190], 99.90th=[ 412], 99.95th=[ 482], 00:11:31.449 | 99.99th=[ 506] 00:11:31.449 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:31.449 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:31.449 lat (usec) : 100=3.35%, 250=95.85%, 500=0.76%, 750=0.02% 00:11:31.449 lat (msec) : 4=0.02% 00:11:31.449 cpu : usr=2.40%, sys=7.60%, ctx=5640, majf=0, minf=5 00:11:31.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.449 issued rwts: total=2568,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.449 00:11:31.449 Run status group 0 (all jobs): 00:11:31.449 READ: bw=10.0MiB/s (10.5MB/s), 10.0MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:31.449 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:31.449 00:11:31.449 Disk stats (read/write): 00:11:31.449 nvme0n1: 
ios=2477/2560, merge=0/0, ticks=528/339, in_queue=867, util=91.68% 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.449 rmmod nvme_tcp 00:11:31.449 rmmod nvme_fabrics 00:11:31.449 rmmod nvme_keyring 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66118 ']' 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66118 00:11:31.449 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66118 ']' 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66118 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66118 00:11:31.450 killing process with pid 66118 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66118' 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@973 -- # kill 66118 00:11:31.450 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66118 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:31.709 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:31.968 ************************************ 00:11:31.968 END TEST nvmf_nmic 00:11:31.968 ************************************ 00:11:31.968 00:11:31.968 real 0m6.264s 00:11:31.968 user 0m19.528s 00:11:31.968 sys 0m2.230s 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:31.968 08:45:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:31.969 08:45:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.969 08:45:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.969 ************************************ 00:11:31.969 START TEST nvmf_fio_target 00:11:31.969 ************************************ 00:11:31.969 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:32.229 * Looking for test storage... 00:11:32.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:32.229 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:32.229 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:32.229 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.229 --rc genhtml_branch_coverage=1 00:11:32.229 --rc genhtml_function_coverage=1 00:11:32.229 --rc genhtml_legend=1 00:11:32.229 --rc geninfo_all_blocks=1 00:11:32.229 --rc geninfo_unexecuted_blocks=1 00:11:32.229 00:11:32.229 ' 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.229 --rc genhtml_branch_coverage=1 00:11:32.229 --rc genhtml_function_coverage=1 00:11:32.229 --rc genhtml_legend=1 00:11:32.229 --rc geninfo_all_blocks=1 00:11:32.229 --rc geninfo_unexecuted_blocks=1 00:11:32.229 00:11:32.229 ' 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:32.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.229 --rc genhtml_branch_coverage=1 00:11:32.229 --rc genhtml_function_coverage=1 00:11:32.229 --rc genhtml_legend=1 00:11:32.229 --rc geninfo_all_blocks=1 00:11:32.229 --rc geninfo_unexecuted_blocks=1 00:11:32.229 00:11:32.229 ' 00:11:32.229 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:32.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.230 --rc genhtml_branch_coverage=1 00:11:32.230 --rc genhtml_function_coverage=1 00:11:32.230 --rc genhtml_legend=1 00:11:32.230 --rc geninfo_all_blocks=1 00:11:32.230 --rc geninfo_unexecuted_blocks=1 00:11:32.230 00:11:32.230 ' 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:32.230 
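The block above is the coverage preamble printed at the start of each test script: scripts/common.sh compares the installed lcov version against 2 field by field (the decimal/ver1[v]/ver2[v] steps in the trace) and, since 1.15 < 2, keeps the legacy --rc lcov_* option strings. A stripped-down equivalent of that comparison -- using sort -V here rather than the script's own per-field loop -- would be:

    # sketch: "is version A older than version B?" check
    version_lt() {
        [[ "$1" != "$2" ]] &&
        [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy lcov/genhtml options"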
08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.230 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:32.230 08:45:03 
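The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message that appears each time common.sh is sourced is a benign artifact: build_nvmf_app_args does a numeric test on a flag that is empty in this environment, the test simply fails, and the script falls through. The pattern, reduced to a few lines (the real variable name is not visible in the trace, so "flag" is a stand-in):

    # sketch: numeric test against an empty variable, and the usual guard
    flag=""                        # unset/empty in this CI environment
    [ "$flag" -eq 1 ]              # -> "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ]         # defaulting to 0 keeps the numeric test well-formed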
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:32.230 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:32.231 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:32.231 Cannot find device "nvmf_init_br" 00:11:32.231 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:32.231 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:32.231 Cannot find device "nvmf_init_br2" 00:11:32.231 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:32.231 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:32.231 Cannot find device "nvmf_tgt_br" 00:11:32.231 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:32.231 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:32.231 Cannot find device "nvmf_tgt_br2" 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:32.491 Cannot find device "nvmf_init_br" 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:32.491 Cannot find device "nvmf_init_br2" 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:32.491 Cannot find device "nvmf_tgt_br" 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:32.491 Cannot find device "nvmf_tgt_br2" 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:32.491 Cannot find device "nvmf_br" 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:32.491 Cannot find device "nvmf_init_if" 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:32.491 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:32.491 Cannot find device "nvmf_init_if2" 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:32.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:32.492 
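fio.sh's nvmftestinit goes through the same pre-clean pass: every "Cannot find device" / "Cannot open network namespace" line is a delete that was allowed to fail, so a fresh run and a rerun after a crash both start from an empty topology. The matching teardown already ran at the end of the nmic test above; its firewall cleanup keys off the SPDK_NVMF comment attached when the rules were inserted. Roughly, condensed from the traced commands:

    # sketch: nvmftestfini's network cleanup, condensed from the trace above
    iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only SPDK-tagged rules
    ip link delete nvmf_br type bridge                          || true
    ip link delete nvmf_init_if                                 || true
    ip link delete nvmf_init_if2                                || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2  || true
    ip netns delete nvmf_tgt_ns_spdk                            || true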
08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:32.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:32.492 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:32.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:32.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:11:32.751 00:11:32.751 --- 10.0.0.3 ping statistics --- 00:11:32.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.751 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:32.751 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:32.751 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:11:32.751 00:11:32.751 --- 10.0.0.4 ping statistics --- 00:11:32.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.751 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:32.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:32.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:11:32.751 00:11:32.751 --- 10.0.0.1 ping statistics --- 00:11:32.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.751 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:32.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:32.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:11:32.751 00:11:32.751 --- 10.0.0.2 ping statistics --- 00:11:32.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.751 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66450 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66450 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66450 ']' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.751 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.751 [2024-11-20 08:45:03.631237] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
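At this point nvmf_veth_init has completed: the target namespace, the four veth pairs, the bridge, the addressing, and the firewall openings are in place, and the pings above confirm both directions work. Condensed into a minimal shell sketch (interface names, addresses, and rules exactly as in the trace; the real helper additionally tags each iptables rule with an SPDK_NVMF comment and silently ignores cleanup failures), the topology handed to the target is:

# Target-side interfaces live in a private namespace; initiator-side interfaces stay in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1, 10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2, 10.0.0.4
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# All four bridge-side veth ends join one bridge, so initiators and target share a single L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
# NVMe/TCP traffic to the default port 4420 is accepted on the initiator interfaces,
# and bridge-local forwarding is allowed.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With the network in place, nvmf_tgt is started inside nvmf_tgt_ns_spdk (its startup log continues below), the TCP transport and the malloc/raid bdevs are created over RPC, and fio then drives the resulting namespaces through the kernel nvme-tcp initiator.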
00:11:32.751 [2024-11-20 08:45:03.631362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.009 [2024-11-20 08:45:03.787093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.009 [2024-11-20 08:45:03.850246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.009 [2024-11-20 08:45:03.850577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.009 [2024-11-20 08:45:03.850737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.010 [2024-11-20 08:45:03.850828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.010 [2024-11-20 08:45:03.850955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.010 [2024-11-20 08:45:03.852322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.010 [2024-11-20 08:45:03.852414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.010 [2024-11-20 08:45:03.852495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.010 [2024-11-20 08:45:03.852493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.010 [2024-11-20 08:45:03.909872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:33.267 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.267 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:33.267 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.267 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.267 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.267 08:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.267 08:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:33.526 [2024-11-20 08:45:04.292395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.526 08:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:33.784 08:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:33.784 08:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.351 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:34.351 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.609 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:34.610 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.868 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:34.868 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:35.126 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.694 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:35.694 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.954 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:35.954 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:36.212 08:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:36.212 08:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:36.779 08:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.039 08:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:37.039 08:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:37.299 08:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:37.299 08:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.559 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:37.818 [2024-11-20 08:45:08.568759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:37.818 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:38.076 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:38.690 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:38.690 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:38.690 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:38.690 08:45:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.690 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:38.690 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:38.690 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:40.623 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:40.623 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:40.623 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.623 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:40.623 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.623 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:40.623 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:40.623 [global] 00:11:40.623 thread=1 00:11:40.623 invalidate=1 00:11:40.623 rw=write 00:11:40.623 time_based=1 00:11:40.623 runtime=1 00:11:40.623 ioengine=libaio 00:11:40.623 direct=1 00:11:40.623 bs=4096 00:11:40.623 iodepth=1 00:11:40.623 norandommap=0 00:11:40.623 numjobs=1 00:11:40.623 00:11:40.623 verify_dump=1 00:11:40.623 verify_backlog=512 00:11:40.623 verify_state_save=0 00:11:40.623 do_verify=1 00:11:40.623 verify=crc32c-intel 00:11:40.623 [job0] 00:11:40.623 filename=/dev/nvme0n1 00:11:40.623 [job1] 00:11:40.623 filename=/dev/nvme0n2 00:11:40.623 [job2] 00:11:40.623 filename=/dev/nvme0n3 00:11:40.623 [job3] 00:11:40.623 filename=/dev/nvme0n4 00:11:40.623 Could not set queue depth (nvme0n1) 00:11:40.623 Could not set queue depth (nvme0n2) 00:11:40.623 Could not set queue depth (nvme0n3) 00:11:40.623 Could not set queue depth (nvme0n4) 00:11:40.882 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.882 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.882 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.882 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.882 fio-3.35 00:11:40.882 Starting 4 threads 00:11:42.257 00:11:42.257 job0: (groupid=0, jobs=1): err= 0: pid=66632: Wed Nov 20 08:45:12 2024 00:11:42.257 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:42.257 slat (nsec): min=11721, max=65602, avg=16318.57, stdev=6123.54 00:11:42.257 clat (usec): min=165, max=570, avg=237.05, stdev=32.44 00:11:42.257 lat (usec): min=181, max=586, avg=253.37, stdev=33.38 00:11:42.257 clat percentiles (usec): 00:11:42.257 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 215], 00:11:42.257 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:11:42.257 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 297], 00:11:42.257 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 486], 99.95th=[ 529], 00:11:42.257 | 99.99th=[ 570] 
00:11:42.257 write: IOPS=2485, BW=9942KiB/s (10.2MB/s)(9952KiB/1001msec); 0 zone resets 00:11:42.257 slat (usec): min=14, max=129, avg=21.26, stdev= 7.84 00:11:42.257 clat (usec): min=31, max=1814, avg=168.87, stdev=48.19 00:11:42.257 lat (usec): min=124, max=1845, avg=190.13, stdev=49.39 00:11:42.257 clat percentiles (usec): 00:11:42.257 | 1.00th=[ 115], 5.00th=[ 124], 10.00th=[ 131], 20.00th=[ 143], 00:11:42.257 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 172], 00:11:42.257 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 221], 00:11:42.257 | 99.00th=[ 265], 99.50th=[ 326], 99.90th=[ 515], 99.95th=[ 519], 00:11:42.257 | 99.99th=[ 1811] 00:11:42.257 bw ( KiB/s): min= 9120, max= 9120, per=27.96%, avg=9120.00, stdev= 0.00, samples=1 00:11:42.257 iops : min= 2280, max= 2280, avg=2280.00, stdev= 0.00, samples=1 00:11:42.257 lat (usec) : 50=0.02%, 100=0.07%, 250=86.90%, 500=12.87%, 750=0.11% 00:11:42.257 lat (msec) : 2=0.02% 00:11:42.257 cpu : usr=2.20%, sys=6.30%, ctx=4556, majf=0, minf=9 00:11:42.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.257 issued rwts: total=2048,2488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.257 job1: (groupid=0, jobs=1): err= 0: pid=66633: Wed Nov 20 08:45:12 2024 00:11:42.257 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:42.257 slat (nsec): min=9107, max=52227, avg=15886.02, stdev=5350.93 00:11:42.257 clat (usec): min=149, max=7404, avg=327.05, stdev=215.74 00:11:42.257 lat (usec): min=186, max=7420, avg=342.94, stdev=215.44 00:11:42.257 clat percentiles (usec): 00:11:42.257 | 1.00th=[ 184], 5.00th=[ 212], 10.00th=[ 227], 20.00th=[ 249], 00:11:42.257 | 30.00th=[ 281], 40.00th=[ 318], 50.00th=[ 338], 60.00th=[ 351], 00:11:42.257 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 392], 95.00th=[ 400], 00:11:42.257 | 99.00th=[ 461], 99.50th=[ 510], 99.90th=[ 4146], 99.95th=[ 7373], 00:11:42.257 | 99.99th=[ 7373] 00:11:42.257 write: IOPS=1729, BW=6917KiB/s (7083kB/s)(6924KiB/1001msec); 0 zone resets 00:11:42.257 slat (usec): min=11, max=137, avg=23.39, stdev= 8.71 00:11:42.257 clat (usec): min=109, max=938, avg=246.28, stdev=59.89 00:11:42.257 lat (usec): min=131, max=958, avg=269.67, stdev=58.72 00:11:42.257 clat percentiles (usec): 00:11:42.257 | 1.00th=[ 147], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 196], 00:11:42.257 | 30.00th=[ 210], 40.00th=[ 225], 50.00th=[ 241], 60.00th=[ 255], 00:11:42.257 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 338], 00:11:42.257 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 783], 99.95th=[ 938], 00:11:42.257 | 99.99th=[ 938] 00:11:42.257 bw ( KiB/s): min= 8192, max= 8192, per=25.11%, avg=8192.00, stdev= 0.00, samples=1 00:11:42.257 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:42.257 lat (usec) : 250=39.46%, 500=60.12%, 750=0.28%, 1000=0.09% 00:11:42.257 lat (msec) : 10=0.06% 00:11:42.257 cpu : usr=2.20%, sys=4.60%, ctx=3267, majf=0, minf=10 00:11:42.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.257 issued rwts: total=1536,1731,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:11:42.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.257 job2: (groupid=0, jobs=1): err= 0: pid=66634: Wed Nov 20 08:45:12 2024 00:11:42.257 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:42.257 slat (nsec): min=11136, max=45486, avg=14729.10, stdev=3332.72 00:11:42.258 clat (usec): min=172, max=632, avg=244.92, stdev=35.49 00:11:42.258 lat (usec): min=185, max=652, avg=259.65, stdev=36.36 00:11:42.258 clat percentiles (usec): 00:11:42.258 | 1.00th=[ 184], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 221], 00:11:42.258 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:11:42.258 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 302], 00:11:42.258 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 578], 99.95th=[ 611], 00:11:42.258 | 99.99th=[ 635] 00:11:42.258 write: IOPS=2159, BW=8639KiB/s (8847kB/s)(8648KiB/1001msec); 0 zone resets 00:11:42.258 slat (usec): min=13, max=114, avg=21.81, stdev= 5.04 00:11:42.258 clat (usec): min=120, max=373, avg=191.57, stdev=27.30 00:11:42.258 lat (usec): min=137, max=488, avg=213.38, stdev=28.82 00:11:42.258 clat percentiles (usec): 00:11:42.258 | 1.00th=[ 130], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 169], 00:11:42.258 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:11:42.258 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 241], 00:11:42.258 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 338], 00:11:42.258 | 99.99th=[ 375] 00:11:42.258 bw ( KiB/s): min= 8304, max= 8304, per=25.45%, avg=8304.00, stdev= 0.00, samples=1 00:11:42.258 iops : min= 2076, max= 2076, avg=2076.00, stdev= 0.00, samples=1 00:11:42.258 lat (usec) : 250=80.86%, 500=19.05%, 750=0.10% 00:11:42.258 cpu : usr=2.00%, sys=5.50%, ctx=4210, majf=0, minf=11 00:11:42.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.258 issued rwts: total=2048,2162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.258 job3: (groupid=0, jobs=1): err= 0: pid=66635: Wed Nov 20 08:45:12 2024 00:11:42.258 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:42.258 slat (nsec): min=12211, max=56191, avg=18774.44, stdev=5156.57 00:11:42.258 clat (usec): min=179, max=1834, avg=318.80, stdev=69.92 00:11:42.258 lat (usec): min=192, max=1852, avg=337.58, stdev=71.76 00:11:42.258 clat percentiles (usec): 00:11:42.258 | 1.00th=[ 198], 5.00th=[ 221], 10.00th=[ 235], 20.00th=[ 255], 00:11:42.258 | 30.00th=[ 289], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 343], 00:11:42.258 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 396], 00:11:42.258 | 99.00th=[ 465], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 1827], 00:11:42.258 | 99.99th=[ 1827] 00:11:42.258 write: IOPS=1781, BW=7125KiB/s (7296kB/s)(7132KiB/1001msec); 0 zone resets 00:11:42.258 slat (usec): min=11, max=140, avg=25.02, stdev= 8.74 00:11:42.258 clat (usec): min=128, max=962, avg=241.02, stdev=54.66 00:11:42.258 lat (usec): min=148, max=977, avg=266.04, stdev=57.07 00:11:42.258 clat percentiles (usec): 00:11:42.258 | 1.00th=[ 151], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 196], 00:11:42.258 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 237], 60.00th=[ 249], 00:11:42.258 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 326], 00:11:42.258 | 99.00th=[ 388], 99.50th=[ 
400], 99.90th=[ 898], 99.95th=[ 963], 00:11:42.258 | 99.99th=[ 963] 00:11:42.258 bw ( KiB/s): min= 8192, max= 8192, per=25.11%, avg=8192.00, stdev= 0.00, samples=1 00:11:42.258 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:42.258 lat (usec) : 250=40.46%, 500=59.17%, 750=0.27%, 1000=0.06% 00:11:42.258 lat (msec) : 2=0.03% 00:11:42.258 cpu : usr=1.30%, sys=6.70%, ctx=3319, majf=0, minf=13 00:11:42.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.258 issued rwts: total=1536,1783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.258 00:11:42.258 Run status group 0 (all jobs): 00:11:42.258 READ: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:11:42.258 WRITE: bw=31.9MiB/s (33.4MB/s), 6917KiB/s-9942KiB/s (7083kB/s-10.2MB/s), io=31.9MiB (33.4MB), run=1001-1001msec 00:11:42.258 00:11:42.258 Disk stats (read/write): 00:11:42.258 nvme0n1: ios=1768/2048, merge=0/0, ticks=451/369, in_queue=820, util=86.46% 00:11:42.258 nvme0n2: ios=1228/1536, merge=0/0, ticks=362/350, in_queue=712, util=85.45% 00:11:42.258 nvme0n3: ios=1536/2005, merge=0/0, ticks=383/402, in_queue=785, util=88.89% 00:11:42.258 nvme0n4: ios=1256/1536, merge=0/0, ticks=399/386, in_queue=785, util=89.55% 00:11:42.258 08:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:42.258 [global] 00:11:42.258 thread=1 00:11:42.258 invalidate=1 00:11:42.258 rw=randwrite 00:11:42.258 time_based=1 00:11:42.258 runtime=1 00:11:42.258 ioengine=libaio 00:11:42.258 direct=1 00:11:42.258 bs=4096 00:11:42.258 iodepth=1 00:11:42.258 norandommap=0 00:11:42.258 numjobs=1 00:11:42.258 00:11:42.258 verify_dump=1 00:11:42.258 verify_backlog=512 00:11:42.258 verify_state_save=0 00:11:42.258 do_verify=1 00:11:42.258 verify=crc32c-intel 00:11:42.258 [job0] 00:11:42.258 filename=/dev/nvme0n1 00:11:42.258 [job1] 00:11:42.258 filename=/dev/nvme0n2 00:11:42.258 [job2] 00:11:42.258 filename=/dev/nvme0n3 00:11:42.258 [job3] 00:11:42.258 filename=/dev/nvme0n4 00:11:42.258 Could not set queue depth (nvme0n1) 00:11:42.258 Could not set queue depth (nvme0n2) 00:11:42.258 Could not set queue depth (nvme0n3) 00:11:42.258 Could not set queue depth (nvme0n4) 00:11:42.258 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.258 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.258 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.258 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.258 fio-3.35 00:11:42.258 Starting 4 threads 00:11:43.636 00:11:43.636 job0: (groupid=0, jobs=1): err= 0: pid=66694: Wed Nov 20 08:45:14 2024 00:11:43.636 read: IOPS=1818, BW=7273KiB/s (7447kB/s)(7280KiB/1001msec) 00:11:43.636 slat (nsec): min=11254, max=46163, avg=14092.40, stdev=3781.34 00:11:43.636 clat (usec): min=222, max=406, avg=273.19, stdev=19.59 00:11:43.636 lat (usec): min=236, max=421, avg=287.29, stdev=20.45 00:11:43.636 clat percentiles (usec): 00:11:43.636 | 
1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:11:43.636 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:11:43.636 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:11:43.636 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 379], 99.95th=[ 408], 00:11:43.636 | 99.99th=[ 408] 00:11:43.636 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:43.636 slat (usec): min=9, max=115, avg=18.29, stdev= 5.17 00:11:43.636 clat (usec): min=135, max=1791, avg=211.46, stdev=43.13 00:11:43.636 lat (usec): min=176, max=1808, avg=229.75, stdev=43.62 00:11:43.636 clat percentiles (usec): 00:11:43.636 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:11:43.636 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:11:43.636 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 245], 00:11:43.636 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 717], 99.95th=[ 750], 00:11:43.636 | 99.99th=[ 1795] 00:11:43.636 bw ( KiB/s): min= 8192, max= 8192, per=21.90%, avg=8192.00, stdev= 0.00, samples=1 00:11:43.636 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:43.636 lat (usec) : 250=55.87%, 500=44.05%, 750=0.03%, 1000=0.03% 00:11:43.636 lat (msec) : 2=0.03% 00:11:43.636 cpu : usr=2.00%, sys=5.10%, ctx=3868, majf=0, minf=17 00:11:43.636 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.636 issued rwts: total=1820,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.636 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.636 job1: (groupid=0, jobs=1): err= 0: pid=66695: Wed Nov 20 08:45:14 2024 00:11:43.636 read: IOPS=1819, BW=7277KiB/s (7451kB/s)(7284KiB/1001msec) 00:11:43.636 slat (nsec): min=7617, max=36453, avg=10260.71, stdev=3140.04 00:11:43.636 clat (usec): min=177, max=411, avg=277.38, stdev=20.34 00:11:43.636 lat (usec): min=205, max=421, avg=287.64, stdev=20.99 00:11:43.636 clat percentiles (usec): 00:11:43.636 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:11:43.636 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:11:43.636 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 314], 00:11:43.636 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 412], 00:11:43.636 | 99.99th=[ 412] 00:11:43.636 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:43.636 slat (nsec): min=9934, max=66622, avg=17243.05, stdev=5585.96 00:11:43.636 clat (usec): min=169, max=1714, avg=212.52, stdev=40.52 00:11:43.636 lat (usec): min=183, max=1727, avg=229.77, stdev=41.19 00:11:43.636 clat percentiles (usec): 00:11:43.636 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:11:43.636 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:11:43.636 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 245], 00:11:43.636 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 502], 99.95th=[ 725], 00:11:43.636 | 99.99th=[ 1713] 00:11:43.636 bw ( KiB/s): min= 8192, max= 8192, per=21.90%, avg=8192.00, stdev= 0.00, samples=1 00:11:43.636 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:43.636 lat (usec) : 250=54.64%, 500=45.28%, 750=0.05% 00:11:43.636 lat (msec) : 2=0.03% 00:11:43.636 cpu : usr=1.60%, sys=4.30%, ctx=3869, majf=0, minf=11 00:11:43.636 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.636 issued rwts: total=1821,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.636 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.636 job2: (groupid=0, jobs=1): err= 0: pid=66696: Wed Nov 20 08:45:14 2024 00:11:43.636 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:43.636 slat (nsec): min=11045, max=62252, avg=13453.58, stdev=3095.23 00:11:43.636 clat (usec): min=163, max=2411, avg=206.36, stdev=50.99 00:11:43.636 lat (usec): min=175, max=2439, avg=219.81, stdev=51.46 00:11:43.636 clat percentiles (usec): 00:11:43.636 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:11:43.636 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:11:43.637 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 239], 00:11:43.637 | 99.00th=[ 273], 99.50th=[ 297], 99.90th=[ 562], 99.95th=[ 1004], 00:11:43.637 | 99.99th=[ 2409] 00:11:43.637 write: IOPS=2600, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:11:43.637 slat (nsec): min=15932, max=94832, avg=19132.89, stdev=4937.55 00:11:43.637 clat (usec): min=110, max=472, avg=145.71, stdev=21.24 00:11:43.637 lat (usec): min=129, max=489, avg=164.84, stdev=22.61 00:11:43.637 clat percentiles (usec): 00:11:43.637 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:11:43.637 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:11:43.637 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 178], 00:11:43.637 | 99.00th=[ 206], 99.50th=[ 225], 99.90th=[ 408], 99.95th=[ 441], 00:11:43.637 | 99.99th=[ 474] 00:11:43.637 bw ( KiB/s): min=12288, max=12288, per=32.84%, avg=12288.00, stdev= 0.00, samples=1 00:11:43.637 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:43.637 lat (usec) : 250=98.51%, 500=1.41%, 750=0.04% 00:11:43.637 lat (msec) : 2=0.02%, 4=0.02% 00:11:43.637 cpu : usr=1.90%, sys=6.50%, ctx=5163, majf=0, minf=11 00:11:43.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.637 issued rwts: total=2560,2603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.637 job3: (groupid=0, jobs=1): err= 0: pid=66697: Wed Nov 20 08:45:14 2024 00:11:43.637 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:43.637 slat (nsec): min=10597, max=51301, avg=12269.20, stdev=2508.73 00:11:43.637 clat (usec): min=160, max=650, avg=202.10, stdev=22.12 00:11:43.637 lat (usec): min=172, max=661, avg=214.37, stdev=22.30 00:11:43.637 clat percentiles (usec): 00:11:43.637 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:11:43.637 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:11:43.637 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 235], 00:11:43.637 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 367], 99.95th=[ 486], 00:11:43.637 | 99.99th=[ 652] 00:11:43.637 write: IOPS=2661, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:11:43.637 slat (usec): min=13, max=104, avg=18.04, stdev= 3.91 00:11:43.637 clat (usec): min=113, max=321, avg=148.27, stdev=16.93 00:11:43.637 lat 
(usec): min=130, max=426, avg=166.31, stdev=17.98 00:11:43.637 clat percentiles (usec): 00:11:43.637 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:11:43.637 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 151], 00:11:43.637 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 180], 00:11:43.637 | 99.00th=[ 200], 99.50th=[ 212], 99.90th=[ 241], 99.95th=[ 273], 00:11:43.637 | 99.99th=[ 322] 00:11:43.637 bw ( KiB/s): min=12288, max=12288, per=32.84%, avg=12288.00, stdev= 0.00, samples=1 00:11:43.637 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:43.637 lat (usec) : 250=98.95%, 500=1.03%, 750=0.02% 00:11:43.637 cpu : usr=2.10%, sys=6.10%, ctx=5224, majf=0, minf=7 00:11:43.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.637 issued rwts: total=2560,2664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:43.637 00:11:43.637 Run status group 0 (all jobs): 00:11:43.637 READ: bw=34.2MiB/s (35.8MB/s), 7273KiB/s-9.99MiB/s (7447kB/s-10.5MB/s), io=34.2MiB (35.9MB), run=1001-1001msec 00:11:43.637 WRITE: bw=36.5MiB/s (38.3MB/s), 8184KiB/s-10.4MiB/s (8380kB/s-10.9MB/s), io=36.6MiB (38.3MB), run=1001-1001msec 00:11:43.637 00:11:43.637 Disk stats (read/write): 00:11:43.637 nvme0n1: ios=1586/1819, merge=0/0, ticks=461/371, in_queue=832, util=89.27% 00:11:43.637 nvme0n2: ios=1585/1820, merge=0/0, ticks=429/362, in_queue=791, util=89.29% 00:11:43.637 nvme0n3: ios=2065/2494, merge=0/0, ticks=441/372, in_queue=813, util=89.43% 00:11:43.637 nvme0n4: ios=2048/2539, merge=0/0, ticks=424/402, in_queue=826, util=89.78% 00:11:43.637 08:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:43.637 [global] 00:11:43.637 thread=1 00:11:43.637 invalidate=1 00:11:43.637 rw=write 00:11:43.637 time_based=1 00:11:43.637 runtime=1 00:11:43.637 ioengine=libaio 00:11:43.637 direct=1 00:11:43.637 bs=4096 00:11:43.637 iodepth=128 00:11:43.637 norandommap=0 00:11:43.637 numjobs=1 00:11:43.637 00:11:43.637 verify_dump=1 00:11:43.637 verify_backlog=512 00:11:43.637 verify_state_save=0 00:11:43.637 do_verify=1 00:11:43.637 verify=crc32c-intel 00:11:43.637 [job0] 00:11:43.637 filename=/dev/nvme0n1 00:11:43.637 [job1] 00:11:43.637 filename=/dev/nvme0n2 00:11:43.637 [job2] 00:11:43.637 filename=/dev/nvme0n3 00:11:43.637 [job3] 00:11:43.637 filename=/dev/nvme0n4 00:11:43.637 Could not set queue depth (nvme0n1) 00:11:43.637 Could not set queue depth (nvme0n2) 00:11:43.637 Could not set queue depth (nvme0n3) 00:11:43.637 Could not set queue depth (nvme0n4) 00:11:43.637 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.637 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.637 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.637 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:43.637 fio-3.35 00:11:43.637 Starting 4 threads 00:11:45.014 00:11:45.014 job0: (groupid=0, jobs=1): err= 0: pid=66757: Wed Nov 20 08:45:15 2024 00:11:45.014 read: IOPS=1588, 
BW=6353KiB/s (6506kB/s)(6404KiB/1008msec) 00:11:45.014 slat (usec): min=4, max=22117, avg=318.74, stdev=1779.19 00:11:45.014 clat (usec): min=1606, max=75425, avg=39822.42, stdev=12616.45 00:11:45.014 lat (usec): min=8500, max=75447, avg=40141.16, stdev=12569.23 00:11:45.014 clat percentiles (usec): 00:11:45.014 | 1.00th=[ 8717], 5.00th=[24511], 10.00th=[27395], 20.00th=[30278], 00:11:45.014 | 30.00th=[31589], 40.00th=[35390], 50.00th=[40633], 60.00th=[42730], 00:11:45.014 | 70.00th=[44827], 80.00th=[46400], 90.00th=[51119], 95.00th=[69731], 00:11:45.014 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:11:45.014 | 99.99th=[74974] 00:11:45.014 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:11:45.014 slat (usec): min=11, max=10706, avg=234.23, stdev=1239.41 00:11:45.014 clat (usec): min=17880, max=43576, avg=30336.12, stdev=7116.96 00:11:45.014 lat (usec): min=21467, max=43597, avg=30570.35, stdev=7058.79 00:11:45.014 clat percentiles (usec): 00:11:45.014 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23462], 20.00th=[23987], 00:11:45.014 | 30.00th=[24511], 40.00th=[26346], 50.00th=[27657], 60.00th=[28443], 00:11:45.014 | 70.00th=[33817], 80.00th=[39584], 90.00th=[41681], 95.00th=[42730], 00:11:45.014 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:11:45.014 | 99.99th=[43779] 00:11:45.014 bw ( KiB/s): min= 7446, max= 8448, per=15.36%, avg=7947.00, stdev=708.52, samples=2 00:11:45.014 iops : min= 1861, max= 2112, avg=1986.50, stdev=177.48, samples=2 00:11:45.014 lat (msec) : 2=0.03%, 10=0.88%, 20=1.51%, 50=92.41%, 100=5.18% 00:11:45.014 cpu : usr=1.79%, sys=4.97%, ctx=116, majf=0, minf=8 00:11:45.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:11:45.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.014 issued rwts: total=1601,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.014 job1: (groupid=0, jobs=1): err= 0: pid=66758: Wed Nov 20 08:45:15 2024 00:11:45.014 read: IOPS=6196, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1002msec) 00:11:45.014 slat (usec): min=4, max=3762, avg=76.00, stdev=336.03 00:11:45.014 clat (usec): min=900, max=13907, avg=10143.51, stdev=906.64 00:11:45.014 lat (usec): min=920, max=14166, avg=10219.51, stdev=915.93 00:11:45.014 clat percentiles (usec): 00:11:45.014 | 1.00th=[ 6980], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:11:45.014 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:11:45.014 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[11338], 00:11:45.014 | 99.00th=[12518], 99.50th=[13042], 99.90th=[13566], 99.95th=[13698], 00:11:45.014 | 99.99th=[13960] 00:11:45.014 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:11:45.014 slat (usec): min=10, max=4099, avg=71.91, stdev=398.04 00:11:45.014 clat (usec): min=5040, max=14137, avg=9576.66, stdev=854.55 00:11:45.014 lat (usec): min=5079, max=14183, avg=9648.57, stdev=930.42 00:11:45.014 clat percentiles (usec): 00:11:45.014 | 1.00th=[ 6980], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9110], 00:11:45.014 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9634], 00:11:45.014 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10945], 00:11:45.014 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13435], 99.95th=[13698], 00:11:45.014 | 
99.99th=[14091] 00:11:45.014 bw ( KiB/s): min=26280, max=26968, per=51.47%, avg=26624.00, stdev=486.49, samples=2 00:11:45.014 iops : min= 6570, max= 6742, avg=6656.00, stdev=121.62, samples=2 00:11:45.014 lat (usec) : 1000=0.02% 00:11:45.014 lat (msec) : 4=0.22%, 10=55.79%, 20=43.98% 00:11:45.014 cpu : usr=5.69%, sys=16.28%, ctx=383, majf=0, minf=1 00:11:45.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:45.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.014 issued rwts: total=6209,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.014 job2: (groupid=0, jobs=1): err= 0: pid=66759: Wed Nov 20 08:45:15 2024 00:11:45.014 read: IOPS=2023, BW=8095KiB/s (8289kB/s)(8192KiB/1012msec) 00:11:45.014 slat (usec): min=4, max=10319, avg=202.51, stdev=1059.81 00:11:45.014 clat (usec): min=13046, max=52441, avg=25511.54, stdev=6920.28 00:11:45.014 lat (usec): min=13062, max=52483, avg=25714.05, stdev=6964.80 00:11:45.014 clat percentiles (usec): 00:11:45.014 | 1.00th=[16057], 5.00th=[19268], 10.00th=[20579], 20.00th=[22152], 00:11:45.014 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:11:45.014 | 70.00th=[23462], 80.00th=[27132], 90.00th=[38011], 95.00th=[42730], 00:11:45.014 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[45351], 00:11:45.014 | 99.99th=[52691] 00:11:45.014 write: IOPS=2307, BW=9229KiB/s (9451kB/s)(9340KiB/1012msec); 0 zone resets 00:11:45.014 slat (usec): min=11, max=9405, avg=242.18, stdev=1139.43 00:11:45.014 clat (msec): min=8, max=101, avg=32.29, stdev=21.97 00:11:45.014 lat (msec): min=10, max=101, avg=32.53, stdev=22.11 00:11:45.014 clat percentiles (msec): 00:11:45.014 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 16], 20.00th=[ 18], 00:11:45.014 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 22], 60.00th=[ 22], 00:11:45.014 | 70.00th=[ 34], 80.00th=[ 45], 90.00th=[ 72], 95.00th=[ 86], 00:11:45.014 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 103], 99.95th=[ 103], 00:11:45.014 | 99.99th=[ 103] 00:11:45.014 bw ( KiB/s): min= 5368, max=12288, per=17.07%, avg=8828.00, stdev=4893.18, samples=2 00:11:45.014 iops : min= 1342, max= 3072, avg=2207.00, stdev=1223.29, samples=2 00:11:45.014 lat (msec) : 10=0.02%, 20=16.72%, 50=74.24%, 100=8.69%, 250=0.32% 00:11:45.014 cpu : usr=3.36%, sys=6.53%, ctx=206, majf=0, minf=1 00:11:45.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:45.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.014 issued rwts: total=2048,2335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.014 job3: (groupid=0, jobs=1): err= 0: pid=66760: Wed Nov 20 08:45:15 2024 00:11:45.014 read: IOPS=1581, BW=6325KiB/s (6477kB/s)(6376KiB/1008msec) 00:11:45.014 slat (usec): min=4, max=21998, avg=316.31, stdev=1774.96 00:11:45.014 clat (usec): min=7460, max=75356, avg=39852.35, stdev=12576.67 00:11:45.014 lat (usec): min=7470, max=75368, avg=40168.66, stdev=12539.17 00:11:45.014 clat percentiles (usec): 00:11:45.014 | 1.00th=[ 7701], 5.00th=[25560], 10.00th=[27132], 20.00th=[30278], 00:11:45.014 | 30.00th=[31327], 40.00th=[35390], 50.00th=[40633], 60.00th=[42730], 00:11:45.014 | 70.00th=[44827], 80.00th=[46400], 
90.00th=[50594], 95.00th=[69731], 00:11:45.014 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:11:45.014 | 99.99th=[74974] 00:11:45.014 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:11:45.014 slat (usec): min=10, max=10684, avg=234.28, stdev=1231.43 00:11:45.014 clat (usec): min=18153, max=43350, avg=30416.81, stdev=7110.97 00:11:45.014 lat (usec): min=20652, max=43381, avg=30651.10, stdev=7056.92 00:11:45.015 clat percentiles (usec): 00:11:45.015 | 1.00th=[19530], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:11:45.015 | 30.00th=[24511], 40.00th=[26870], 50.00th=[27657], 60.00th=[28967], 00:11:45.015 | 70.00th=[33817], 80.00th=[39584], 90.00th=[42206], 95.00th=[42730], 00:11:45.015 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:45.015 | 99.99th=[43254] 00:11:45.015 bw ( KiB/s): min= 7432, max= 8400, per=15.30%, avg=7916.00, stdev=684.48, samples=2 00:11:45.015 iops : min= 1858, max= 2100, avg=1979.00, stdev=171.12, samples=2 00:11:45.015 lat (msec) : 10=0.71%, 20=1.48%, 50=92.61%, 100=5.19% 00:11:45.015 cpu : usr=2.48%, sys=5.76%, ctx=114, majf=0, minf=5 00:11:45.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:11:45.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.015 issued rwts: total=1594,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.015 00:11:45.015 Run status group 0 (all jobs): 00:11:45.015 READ: bw=44.2MiB/s (46.4MB/s), 6325KiB/s-24.2MiB/s (6477kB/s-25.4MB/s), io=44.7MiB (46.9MB), run=1002-1012msec 00:11:45.015 WRITE: bw=50.5MiB/s (53.0MB/s), 8127KiB/s-25.9MiB/s (8322kB/s-27.2MB/s), io=51.1MiB (53.6MB), run=1002-1012msec 00:11:45.015 00:11:45.015 Disk stats (read/write): 00:11:45.015 nvme0n1: ios=1426/1536, merge=0/0, ticks=13326/10180, in_queue=23506, util=86.56% 00:11:45.015 nvme0n2: ios=5273/5632, merge=0/0, ticks=25449/21959, in_queue=47408, util=87.32% 00:11:45.015 nvme0n3: ios=1913/2048, merge=0/0, ticks=24701/25897, in_queue=50598, util=88.82% 00:11:45.015 nvme0n4: ios=1376/1536, merge=0/0, ticks=14177/11430, in_queue=25607, util=89.58% 00:11:45.015 08:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:45.015 [global] 00:11:45.015 thread=1 00:11:45.015 invalidate=1 00:11:45.015 rw=randwrite 00:11:45.015 time_based=1 00:11:45.015 runtime=1 00:11:45.015 ioengine=libaio 00:11:45.015 direct=1 00:11:45.015 bs=4096 00:11:45.015 iodepth=128 00:11:45.015 norandommap=0 00:11:45.015 numjobs=1 00:11:45.015 00:11:45.015 verify_dump=1 00:11:45.015 verify_backlog=512 00:11:45.015 verify_state_save=0 00:11:45.015 do_verify=1 00:11:45.015 verify=crc32c-intel 00:11:45.015 [job0] 00:11:45.015 filename=/dev/nvme0n1 00:11:45.015 [job1] 00:11:45.015 filename=/dev/nvme0n2 00:11:45.015 [job2] 00:11:45.015 filename=/dev/nvme0n3 00:11:45.015 [job3] 00:11:45.015 filename=/dev/nvme0n4 00:11:45.015 Could not set queue depth (nvme0n1) 00:11:45.015 Could not set queue depth (nvme0n2) 00:11:45.015 Could not set queue depth (nvme0n3) 00:11:45.015 Could not set queue depth (nvme0n4) 00:11:45.015 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.015 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.015 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.015 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.015 fio-3.35 00:11:45.015 Starting 4 threads 00:11:46.393 00:11:46.393 job0: (groupid=0, jobs=1): err= 0: pid=66819: Wed Nov 20 08:45:17 2024 00:11:46.393 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:11:46.393 slat (usec): min=7, max=5869, avg=83.99, stdev=512.59 00:11:46.393 clat (usec): min=2876, max=19304, avg=11778.20, stdev=1326.71 00:11:46.393 lat (usec): min=2900, max=23002, avg=11862.20, stdev=1347.28 00:11:46.393 clat percentiles (usec): 00:11:46.393 | 1.00th=[ 7373], 5.00th=[10421], 10.00th=[11076], 20.00th=[11469], 00:11:46.393 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:11:46.393 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:11:46.393 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19268], 99.95th=[19268], 00:11:46.393 | 99.99th=[19268] 00:11:46.393 write: IOPS=5625, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:46.393 slat (usec): min=10, max=7402, avg=85.35, stdev=487.52 00:11:46.393 clat (usec): min=1812, max=14985, avg=10752.30, stdev=1099.43 00:11:46.393 lat (usec): min=1846, max=15276, avg=10837.64, stdev=1009.90 00:11:46.393 clat percentiles (usec): 00:11:46.393 | 1.00th=[ 7111], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:11:46.393 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:11:46.393 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:11:46.393 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15008], 99.95th=[15008], 00:11:46.393 | 99.99th=[15008] 00:11:46.393 bw ( KiB/s): min=21512, max=23591, per=34.63%, avg=22551.50, stdev=1470.07, samples=2 00:11:46.393 iops : min= 5378, max= 5897, avg=5637.50, stdev=366.99, samples=2 00:11:46.393 lat (msec) : 2=0.07%, 4=0.09%, 10=12.08%, 20=87.76% 00:11:46.393 cpu : usr=5.59%, sys=14.67%, ctx=243, majf=0, minf=3 00:11:46.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:46.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.393 issued rwts: total=5632,5642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.393 job1: (groupid=0, jobs=1): err= 0: pid=66820: Wed Nov 20 08:45:17 2024 00:11:46.393 read: IOPS=5422, BW=21.2MiB/s (22.2MB/s)(21.2MiB/1003msec) 00:11:46.393 slat (usec): min=7, max=5685, avg=88.66, stdev=392.09 00:11:46.393 clat (usec): min=756, max=16843, avg=11726.25, stdev=1221.85 00:11:46.393 lat (usec): min=2341, max=17817, avg=11814.91, stdev=1233.78 00:11:46.393 clat percentiles (usec): 00:11:46.393 | 1.00th=[ 6521], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 00:11:46.393 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:11:46.393 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12518], 95.00th=[13304], 00:11:46.393 | 99.00th=[14746], 99.50th=[15139], 99.90th=[15795], 99.95th=[16712], 00:11:46.393 | 99.99th=[16909] 00:11:46.393 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:46.393 slat (usec): min=10, max=4689, avg=83.68, stdev=465.24 00:11:46.393 clat (usec): min=5678, max=16357, avg=11167.06, 
stdev=973.69 00:11:46.393 lat (usec): min=5694, max=16403, avg=11250.74, stdev=1067.80 00:11:46.393 clat percentiles (usec): 00:11:46.393 | 1.00th=[ 8094], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10683], 00:11:46.393 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:11:46.393 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[12649], 00:11:46.394 | 99.00th=[14615], 99.50th=[15139], 99.90th=[15926], 99.95th=[16188], 00:11:46.394 | 99.99th=[16319] 00:11:46.394 bw ( KiB/s): min=22344, max=22757, per=34.63%, avg=22550.50, stdev=292.04, samples=2 00:11:46.394 iops : min= 5586, max= 5689, avg=5637.50, stdev=72.83, samples=2 00:11:46.394 lat (usec) : 1000=0.01% 00:11:46.394 lat (msec) : 4=0.20%, 10=5.38%, 20=94.41% 00:11:46.394 cpu : usr=4.69%, sys=16.77%, ctx=338, majf=0, minf=5 00:11:46.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:46.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.394 issued rwts: total=5439,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.394 job2: (groupid=0, jobs=1): err= 0: pid=66821: Wed Nov 20 08:45:17 2024 00:11:46.394 read: IOPS=2793, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1007msec) 00:11:46.394 slat (usec): min=7, max=21591, avg=177.44, stdev=1318.87 00:11:46.394 clat (usec): min=1725, max=45867, avg=24059.21, stdev=5342.41 00:11:46.394 lat (usec): min=11977, max=48243, avg=24236.65, stdev=5420.12 00:11:46.394 clat percentiles (usec): 00:11:46.394 | 1.00th=[12911], 5.00th=[17171], 10.00th=[17957], 20.00th=[18482], 00:11:46.394 | 30.00th=[19006], 40.00th=[23462], 50.00th=[25822], 60.00th=[26608], 00:11:46.394 | 70.00th=[26870], 80.00th=[27657], 90.00th=[29754], 95.00th=[34866], 00:11:46.394 | 99.00th=[35914], 99.50th=[37487], 99.90th=[43779], 99.95th=[45351], 00:11:46.394 | 99.99th=[45876] 00:11:46.394 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:11:46.394 slat (usec): min=5, max=16936, avg=156.33, stdev=1071.78 00:11:46.394 clat (usec): min=7315, max=35902, avg=19489.79, stdev=5711.62 00:11:46.394 lat (usec): min=10407, max=35951, avg=19646.12, stdev=5661.94 00:11:46.394 clat percentiles (usec): 00:11:46.394 | 1.00th=[11600], 5.00th=[12518], 10.00th=[13173], 20.00th=[13829], 00:11:46.394 | 30.00th=[14353], 40.00th=[15664], 50.00th=[18744], 60.00th=[20579], 00:11:46.394 | 70.00th=[24773], 80.00th=[25560], 90.00th=[26608], 95.00th=[27657], 00:11:46.394 | 99.00th=[31589], 99.50th=[31851], 99.90th=[32113], 99.95th=[34866], 00:11:46.394 | 99.99th=[35914] 00:11:46.394 bw ( KiB/s): min=11256, max=13320, per=18.87%, avg=12288.00, stdev=1459.47, samples=2 00:11:46.394 iops : min= 2814, max= 3330, avg=3072.00, stdev=364.87, samples=2 00:11:46.394 lat (msec) : 2=0.02%, 10=0.32%, 20=47.61%, 50=52.05% 00:11:46.394 cpu : usr=2.19%, sys=9.15%, ctx=118, majf=0, minf=5 00:11:46.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:46.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.394 issued rwts: total=2813,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.394 job3: (groupid=0, jobs=1): err= 0: pid=66822: Wed Nov 20 08:45:17 2024 00:11:46.394 read: IOPS=1965, BW=7861KiB/s 
(8049kB/s)(7908KiB/1006msec) 00:11:46.394 slat (usec): min=7, max=24458, avg=215.88, stdev=1357.81 00:11:46.394 clat (usec): min=577, max=73906, avg=28095.54, stdev=9407.27 00:11:46.394 lat (usec): min=5988, max=73947, avg=28311.42, stdev=9480.47 00:11:46.394 clat percentiles (usec): 00:11:46.394 | 1.00th=[ 6390], 5.00th=[19268], 10.00th=[21890], 20.00th=[25035], 00:11:46.394 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:11:46.394 | 70.00th=[27132], 80.00th=[27919], 90.00th=[38011], 95.00th=[51119], 00:11:46.394 | 99.00th=[66847], 99.50th=[67634], 99.90th=[71828], 99.95th=[73925], 00:11:46.394 | 99.99th=[73925] 00:11:46.394 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:11:46.394 slat (usec): min=13, max=16298, avg=272.13, stdev=1423.68 00:11:46.394 clat (usec): min=17617, max=86629, avg=34569.55, stdev=16620.01 00:11:46.394 lat (usec): min=17663, max=86659, avg=34841.67, stdev=16745.01 00:11:46.394 clat percentiles (usec): 00:11:46.394 | 1.00th=[19006], 5.00th=[21365], 10.00th=[23725], 20.00th=[24511], 00:11:46.394 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26608], 60.00th=[27657], 00:11:46.394 | 70.00th=[29492], 80.00th=[47449], 90.00th=[64750], 95.00th=[74974], 00:11:46.394 | 99.00th=[82314], 99.50th=[85459], 99.90th=[86508], 99.95th=[86508], 00:11:46.394 | 99.99th=[86508] 00:11:46.394 bw ( KiB/s): min= 5808, max=10597, per=12.60%, avg=8202.50, stdev=3386.33, samples=2 00:11:46.394 iops : min= 1452, max= 2649, avg=2050.50, stdev=846.41, samples=2 00:11:46.394 lat (usec) : 750=0.02% 00:11:46.394 lat (msec) : 10=1.57%, 20=3.03%, 50=82.66%, 100=12.72% 00:11:46.394 cpu : usr=2.29%, sys=6.87%, ctx=169, majf=0, minf=11 00:11:46.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:46.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.394 issued rwts: total=1977,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.394 00:11:46.394 Run status group 0 (all jobs): 00:11:46.394 READ: bw=61.5MiB/s (64.5MB/s), 7861KiB/s-21.9MiB/s (8049kB/s-23.0MB/s), io=62.0MiB (65.0MB), run=1003-1007msec 00:11:46.394 WRITE: bw=63.6MiB/s (66.7MB/s), 8143KiB/s-22.0MiB/s (8339kB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1003-1007msec 00:11:46.394 00:11:46.394 Disk stats (read/write): 00:11:46.394 nvme0n1: ios=4658/4928, merge=0/0, ticks=50852/48326, in_queue=99178, util=87.17% 00:11:46.394 nvme0n2: ios=4657/4788, merge=0/0, ticks=25884/21324, in_queue=47208, util=88.32% 00:11:46.394 nvme0n3: ios=2184/2560, merge=0/0, ticks=52785/48413, in_queue=101198, util=88.83% 00:11:46.394 nvme0n4: ios=1536/1943, merge=0/0, ticks=19573/30907, in_queue=50480, util=89.47% 00:11:46.394 08:45:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:46.394 08:45:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66835 00:11:46.394 08:45:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:46.394 08:45:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:46.394 [global] 00:11:46.394 thread=1 00:11:46.394 invalidate=1 00:11:46.394 rw=read 00:11:46.394 time_based=1 00:11:46.394 runtime=10 00:11:46.394 ioengine=libaio 00:11:46.394 direct=1 00:11:46.394 bs=4096 00:11:46.394 iodepth=1 
00:11:46.394 norandommap=1 00:11:46.394 numjobs=1 00:11:46.394 00:11:46.394 [job0] 00:11:46.394 filename=/dev/nvme0n1 00:11:46.394 [job1] 00:11:46.394 filename=/dev/nvme0n2 00:11:46.394 [job2] 00:11:46.394 filename=/dev/nvme0n3 00:11:46.394 [job3] 00:11:46.394 filename=/dev/nvme0n4 00:11:46.394 Could not set queue depth (nvme0n1) 00:11:46.394 Could not set queue depth (nvme0n2) 00:11:46.394 Could not set queue depth (nvme0n3) 00:11:46.394 Could not set queue depth (nvme0n4) 00:11:46.394 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.394 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.394 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.394 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.394 fio-3.35 00:11:46.394 Starting 4 threads 00:11:49.679 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:49.679 fio: pid=66878, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:49.679 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41775104, buflen=4096 00:11:49.679 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:49.937 fio: pid=66877, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:49.937 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46104576, buflen=4096 00:11:49.937 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:49.937 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:50.197 fio: pid=66875, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:50.197 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=59248640, buflen=4096 00:11:50.197 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.197 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:50.458 fio: pid=66876, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:50.458 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=536576, buflen=4096 00:11:50.458 00:11:50.458 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66875: Wed Nov 20 08:45:21 2024 00:11:50.458 read: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(56.5MiB/3531msec) 00:11:50.458 slat (usec): min=8, max=15800, avg=15.90, stdev=217.36 00:11:50.458 clat (usec): min=59, max=7914, avg=226.90, stdev=81.45 00:11:50.458 lat (usec): min=144, max=16052, avg=242.81, stdev=232.22 00:11:50.458 clat percentiles (usec): 00:11:50.458 | 1.00th=[ 149], 5.00th=[ 169], 10.00th=[ 204], 20.00th=[ 215], 00:11:50.458 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:11:50.458 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 258], 00:11:50.458 | 99.00th=[ 273], 99.50th=[ 
281], 99.90th=[ 562], 99.95th=[ 1029], 00:11:50.458 | 99.99th=[ 3720] 00:11:50.458 bw ( KiB/s): min=15656, max=17504, per=30.05%, avg=16398.67, stdev=603.55, samples=6 00:11:50.458 iops : min= 3914, max= 4376, avg=4099.67, stdev=150.89, samples=6 00:11:50.458 lat (usec) : 100=0.01%, 250=89.71%, 500=10.15%, 750=0.06%, 1000=0.01% 00:11:50.458 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:11:50.458 cpu : usr=1.13%, sys=4.99%, ctx=14479, majf=0, minf=1 00:11:50.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.458 issued rwts: total=14466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.458 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66876: Wed Nov 20 08:45:21 2024 00:11:50.458 read: IOPS=4296, BW=16.8MiB/s (17.6MB/s)(64.5MiB/3844msec) 00:11:50.458 slat (usec): min=7, max=10228, avg=14.45, stdev=152.83 00:11:50.458 clat (usec): min=126, max=2678, avg=217.23, stdev=44.06 00:11:50.458 lat (usec): min=139, max=10455, avg=231.68, stdev=158.96 00:11:50.458 clat percentiles (usec): 00:11:50.458 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 192], 00:11:50.458 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:11:50.458 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 258], 00:11:50.458 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 404], 99.95th=[ 603], 00:11:50.458 | 99.99th=[ 1893] 00:11:50.458 bw ( KiB/s): min=16248, max=18336, per=30.66%, avg=16731.57, stdev=743.71, samples=7 00:11:50.458 iops : min= 4062, max= 4584, avg=4182.86, stdev=185.92, samples=7 00:11:50.458 lat (usec) : 250=90.07%, 500=9.86%, 750=0.02%, 1000=0.03% 00:11:50.458 lat (msec) : 2=0.01%, 4=0.01% 00:11:50.458 cpu : usr=0.91%, sys=4.79%, ctx=16533, majf=0, minf=1 00:11:50.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.458 issued rwts: total=16516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.458 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66877: Wed Nov 20 08:45:21 2024 00:11:50.458 read: IOPS=3449, BW=13.5MiB/s (14.1MB/s)(44.0MiB/3263msec) 00:11:50.458 slat (usec): min=10, max=7852, avg=16.53, stdev=101.80 00:11:50.458 clat (usec): min=145, max=3609, avg=271.66, stdev=75.25 00:11:50.458 lat (usec): min=159, max=8029, avg=288.19, stdev=126.31 00:11:50.458 clat percentiles (usec): 00:11:50.458 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 182], 20.00th=[ 253], 00:11:50.458 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:11:50.458 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 347], 00:11:50.458 | 99.00th=[ 433], 99.50th=[ 478], 99.90th=[ 1074], 99.95th=[ 1532], 00:11:50.458 | 99.99th=[ 2704] 00:11:50.458 bw ( KiB/s): min=12456, max=13832, per=24.49%, avg=13360.00, stdev=573.55, samples=6 00:11:50.458 iops : min= 3114, max= 3458, avg=3340.00, stdev=143.39, samples=6 00:11:50.458 lat (usec) : 250=17.28%, 500=82.33%, 750=0.26%, 1000=0.02% 00:11:50.458 lat (msec) : 2=0.08%, 4=0.03% 00:11:50.458 cpu : 
usr=1.35%, sys=4.48%, ctx=11261, majf=0, minf=2 00:11:50.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.458 issued rwts: total=11257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.458 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66878: Wed Nov 20 08:45:21 2024 00:11:50.458 read: IOPS=3420, BW=13.4MiB/s (14.0MB/s)(39.8MiB/2982msec) 00:11:50.458 slat (usec): min=10, max=104, avg=14.38, stdev= 4.00 00:11:50.458 clat (usec): min=147, max=2048, avg=276.31, stdev=40.95 00:11:50.458 lat (usec): min=160, max=2070, avg=290.69, stdev=41.59 00:11:50.458 clat percentiles (usec): 00:11:50.458 | 1.00th=[ 186], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:11:50.458 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:11:50.458 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:11:50.458 | 99.00th=[ 379], 99.50th=[ 445], 99.90th=[ 660], 99.95th=[ 775], 00:11:50.458 | 99.99th=[ 1369] 00:11:50.458 bw ( KiB/s): min=12880, max=14048, per=25.05%, avg=13670.40, stdev=454.36, samples=5 00:11:50.458 iops : min= 3220, max= 3512, avg=3417.60, stdev=113.59, samples=5 00:11:50.458 lat (usec) : 250=9.51%, 500=90.13%, 750=0.29%, 1000=0.04% 00:11:50.458 lat (msec) : 2=0.01%, 4=0.01% 00:11:50.458 cpu : usr=1.27%, sys=4.26%, ctx=10204, majf=0, minf=2 00:11:50.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.458 issued rwts: total=10200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.458 00:11:50.458 Run status group 0 (all jobs): 00:11:50.458 READ: bw=53.3MiB/s (55.9MB/s), 13.4MiB/s-16.8MiB/s (14.0MB/s-17.6MB/s), io=205MiB (215MB), run=2982-3844msec 00:11:50.458 00:11:50.458 Disk stats (read/write): 00:11:50.458 nvme0n1: ios=13759/0, merge=0/0, ticks=3029/0, in_queue=3029, util=94.76% 00:11:50.458 nvme0n2: ios=15102/0, merge=0/0, ticks=3206/0, in_queue=3206, util=95.66% 00:11:50.458 nvme0n3: ios=10479/0, merge=0/0, ticks=2952/0, in_queue=2952, util=96.40% 00:11:50.458 nvme0n4: ios=9800/0, merge=0/0, ticks=2754/0, in_queue=2754, util=96.76% 00:11:50.458 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.458 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:50.718 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.718 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:51.285 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.285 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:11:51.544 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.544 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:51.884 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.884 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66835 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.143 nvmf hotplug test: fio failed as expected 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:52.143 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:52.402 
08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.402 rmmod nvme_tcp 00:11:52.402 rmmod nvme_fabrics 00:11:52.402 rmmod nvme_keyring 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66450 ']' 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66450 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66450 ']' 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66450 00:11:52.402 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:52.662 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.662 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66450 00:11:52.662 killing process with pid 66450 00:11:52.662 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.662 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.662 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66450' 00:11:52.662 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66450 00:11:52.662 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66450 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.922 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:53.181 00:11:53.181 real 0m21.024s 00:11:53.181 user 1m20.174s 00:11:53.181 sys 0m9.774s 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.181 ************************************ 00:11:53.181 END TEST nvmf_fio_target 00:11:53.181 ************************************ 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.181 ************************************ 00:11:53.181 START TEST nvmf_bdevio 00:11:53.181 ************************************ 00:11:53.181 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:53.181 * Looking for test storage... 
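The hotplug phase that just completed above follows a simple pattern: fio is started in the background against the exported namespaces, the backing bdevs are deleted over RPC while reads are still in flight, and the resulting "Operation not supported" errors are the expected outcome that the test then asserts on. A rough sketch of that flow, assembled from the commands visible in the trace (an illustration, not the literal contents of target/fio.sh):

    # start fio in the background with the parameters seen in the log
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # pull the backing bdevs out from under the running workload
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_raid_delete concat0
    "$rpc" bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$m"
    done
    # fio is expected to fail once its devices disappear
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'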
00:11:53.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:53.181 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.181 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.181 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.442 --rc genhtml_branch_coverage=1 00:11:53.442 --rc genhtml_function_coverage=1 00:11:53.442 --rc genhtml_legend=1 00:11:53.442 --rc geninfo_all_blocks=1 00:11:53.442 --rc geninfo_unexecuted_blocks=1 00:11:53.442 00:11:53.442 ' 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.442 --rc genhtml_branch_coverage=1 00:11:53.442 --rc genhtml_function_coverage=1 00:11:53.442 --rc genhtml_legend=1 00:11:53.442 --rc geninfo_all_blocks=1 00:11:53.442 --rc geninfo_unexecuted_blocks=1 00:11:53.442 00:11:53.442 ' 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.442 --rc genhtml_branch_coverage=1 00:11:53.442 --rc genhtml_function_coverage=1 00:11:53.442 --rc genhtml_legend=1 00:11:53.442 --rc geninfo_all_blocks=1 00:11:53.442 --rc geninfo_unexecuted_blocks=1 00:11:53.442 00:11:53.442 ' 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.442 --rc genhtml_branch_coverage=1 00:11:53.442 --rc genhtml_function_coverage=1 00:11:53.442 --rc genhtml_legend=1 00:11:53.442 --rc geninfo_all_blocks=1 00:11:53.442 --rc geninfo_unexecuted_blocks=1 00:11:53.442 00:11:53.442 ' 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.442 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.443 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
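The nvmftestinit call traced next builds a small veth/namespace topology for the TCP transport: the target side lives in the nvmf_tgt_ns_spdk network namespace with 10.0.0.3 and 10.0.0.4, the initiator side stays in the root namespace with 10.0.0.1 and 10.0.0.2, both sides are joined by the nvmf_br bridge, port 4420 is opened in iptables, and connectivity is verified with pings. A condensed sketch of that wiring, pieced together from the commands in the trace below (not a verbatim excerpt of nvmf/common.sh; the "up" steps and the bridge FORWARD rule are omitted for brevity):

    # create the target namespace and veth pairs (the *_br ends stay in the root ns)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # address the initiator (root ns) and target (inside the ns) interfaces
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bridge the peer ends together and admit the NVMe/TCP port
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT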
00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:53.443 Cannot find device "nvmf_init_br" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:53.443 Cannot find device "nvmf_init_br2" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:53.443 Cannot find device "nvmf_tgt_br" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:53.443 Cannot find device "nvmf_tgt_br2" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:53.443 Cannot find device "nvmf_init_br" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:53.443 Cannot find device "nvmf_init_br2" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:53.443 Cannot find device "nvmf_tgt_br" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:53.443 Cannot find device "nvmf_tgt_br2" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:53.443 Cannot find device "nvmf_br" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:53.443 Cannot find device "nvmf_init_if" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:53.443 Cannot find device "nvmf_init_if2" 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:53.443 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:53.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:53.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:53.444 
08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:53.444 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:53.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:11:53.703 00:11:53.703 --- 10.0.0.3 ping statistics --- 00:11:53.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.703 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:53.703 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:53.703 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:11:53.703 00:11:53.703 --- 10.0.0.4 ping statistics --- 00:11:53.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.703 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:53.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:53.703 00:11:53.703 --- 10.0.0.1 ping statistics --- 00:11:53.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.703 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:53.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:53.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:53.703 00:11:53.703 --- 10.0.0.2 ping statistics --- 00:11:53.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.703 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67203 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67203 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67203 ']' 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.703 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.703 [2024-11-20 08:45:24.555913] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:53.703 [2024-11-20 08:45:24.556723] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.962 [2024-11-20 08:45:24.701406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.962 [2024-11-20 08:45:24.782049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.962 [2024-11-20 08:45:24.782130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.962 [2024-11-20 08:45:24.782141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.962 [2024-11-20 08:45:24.782150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.962 [2024-11-20 08:45:24.782156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.962 [2024-11-20 08:45:24.783843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:53.962 [2024-11-20 08:45:24.783975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:53.962 [2024-11-20 08:45:24.784126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.962 [2024-11-20 08:45:24.784126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:53.962 [2024-11-20 08:45:24.856012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:54.898 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.898 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:54.898 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.899 [2024-11-20 08:45:25.623971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.899 Malloc0 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.899 [2024-11-20 08:45:25.697742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:54.899 { 00:11:54.899 "params": { 00:11:54.899 "name": "Nvme$subsystem", 00:11:54.899 "trtype": "$TEST_TRANSPORT", 00:11:54.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.899 "adrfam": "ipv4", 00:11:54.899 "trsvcid": "$NVMF_PORT", 00:11:54.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.899 "hdgst": ${hdgst:-false}, 00:11:54.899 "ddgst": ${ddgst:-false} 00:11:54.899 }, 00:11:54.899 "method": "bdev_nvme_attach_controller" 00:11:54.899 } 00:11:54.899 EOF 00:11:54.899 )") 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
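The target-side bring-up just traced amounts to a short RPC sequence: create the TCP transport, back a 64 MiB / 512 B-block malloc bdev, expose it as a namespace of nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.3:4420. For reference, the equivalent calls with the standalone rpc.py client look like this (a sketch; the test itself issues the same commands through the in-process rpc_cmd helper, as seen above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420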
00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:54.899 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:54.899 "params": { 00:11:54.899 "name": "Nvme1", 00:11:54.899 "trtype": "tcp", 00:11:54.899 "traddr": "10.0.0.3", 00:11:54.899 "adrfam": "ipv4", 00:11:54.899 "trsvcid": "4420", 00:11:54.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:54.899 "hdgst": false, 00:11:54.899 "ddgst": false 00:11:54.899 }, 00:11:54.899 "method": "bdev_nvme_attach_controller" 00:11:54.899 }' 00:11:54.899 [2024-11-20 08:45:25.758719] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:54.899 [2024-11-20 08:45:25.758881] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67239 ] 00:11:55.157 [2024-11-20 08:45:25.912728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.158 [2024-11-20 08:45:26.003619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.158 [2024-11-20 08:45:26.003752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.158 [2024-11-20 08:45:26.003758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.417 [2024-11-20 08:45:26.089102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.417 I/O targets: 00:11:55.417 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:55.417 00:11:55.417 00:11:55.417 CUnit - A unit testing framework for C - Version 2.1-3 00:11:55.417 http://cunit.sourceforge.net/ 00:11:55.417 00:11:55.417 00:11:55.417 Suite: bdevio tests on: Nvme1n1 00:11:55.417 Test: blockdev write read block ...passed 00:11:55.417 Test: blockdev write zeroes read block ...passed 00:11:55.417 Test: blockdev write zeroes read no split ...passed 00:11:55.417 Test: blockdev write zeroes read split ...passed 00:11:55.417 Test: blockdev write zeroes read split partial ...passed 00:11:55.417 Test: blockdev reset ...[2024-11-20 08:45:26.264988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:55.417 [2024-11-20 08:45:26.265140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218b180 (9): Bad file descriptor 00:11:55.417 passed 00:11:55.417 Test: blockdev write read 8 blocks ...[2024-11-20 08:45:26.284548] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:55.417 passed 00:11:55.417 Test: blockdev write read size > 128k ...passed 00:11:55.417 Test: blockdev write read invalid size ...passed 00:11:55.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.417 Test: blockdev write read max offset ...passed 00:11:55.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.417 Test: blockdev writev readv 8 blocks ...passed 00:11:55.417 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.417 Test: blockdev writev readv block ...passed 00:11:55.417 Test: blockdev writev readv size > 128k ...passed 00:11:55.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.417 Test: blockdev comparev and writev ...[2024-11-20 08:45:26.292247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:55.417 [2024-11-20 08:45:26.292306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.292327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:55.417 [2024-11-20 08:45:26.292339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:55.417 passed 00:11:55.417 Test: blockdev nvme passthru rw ...passed 00:11:55.417 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.417 Test: blockdev nvme admin passthru ...[2024-11-20 08:45:26.292716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:55.417 [2024-11-20 08:45:26.292739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.292757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:55.417 [2024-11-20 08:45:26.292769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.293059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:55.417 [2024-11-20 08:45:26.293077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.293094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:55.417 [2024-11-20 08:45:26.293104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.293392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:55.417 [2024-11-20 08:45:26.293409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.293426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:11:55.417 [2024-11-20 08:45:26.293436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.294236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:55.417 [2024-11-20 08:45:26.294256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.294370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:55.417 [2024-11-20 08:45:26.294386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.294502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:55.417 [2024-11-20 08:45:26.294518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:55.417 [2024-11-20 08:45:26.294618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:55.417 [2024-11-20 08:45:26.294634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:55.417 passed 00:11:55.417 Test: blockdev copy ...passed 00:11:55.417 00:11:55.417 Run Summary: Type Total Ran Passed Failed Inactive 00:11:55.417 suites 1 1 n/a 0 0 00:11:55.417 tests 23 23 23 0 0 00:11:55.417 asserts 152 152 152 0 n/a 00:11:55.417 00:11:55.417 Elapsed time = 0.163 seconds 00:11:55.676 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.676 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.676 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.676 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.677 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:55.677 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:55.677 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.677 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.008 rmmod nvme_tcp 00:11:56.008 rmmod nvme_fabrics 00:11:56.008 rmmod nvme_keyring 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 67203 ']' 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67203 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67203 ']' 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67203 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67203 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:56.008 killing process with pid 67203 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67203' 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67203 00:11:56.008 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67203 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:56.267 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:56.527 08:45:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:56.527 00:11:56.527 real 0m3.371s 00:11:56.527 user 0m10.645s 00:11:56.527 sys 0m0.986s 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.527 ************************************ 00:11:56.527 END TEST nvmf_bdevio 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:56.527 ************************************ 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:56.527 00:11:56.527 real 2m40.634s 00:11:56.527 user 7m6.009s 00:11:56.527 sys 0m54.340s 00:11:56.527 ************************************ 00:11:56.527 END TEST nvmf_target_core 00:11:56.527 ************************************ 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:56.527 08:45:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:56.527 08:45:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.527 08:45:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.527 08:45:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:56.527 ************************************ 00:11:56.527 START TEST nvmf_target_extra 00:11:56.527 ************************************ 00:11:56.527 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:56.786 * Looking for test storage... 
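The nvmftestfini/nvmf_veth_fini teardown traced just above condenses to roughly the following. Sketch only: interface, bridge and namespace names are the ones this harness uses, and the final ip netns delete is an assumption about what remove_spdk_ns ultimately does.

    nvmfpid=67203                       # pid recorded when the nvmf_tgt target app was started
    kill "$nvmfpid"                     # killprocess in the trace above
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics         # nvme_keyring gets unloaded along with these
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" nomaster
        ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of remove_spdk_ns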
00:11:56.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.786 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.787 --rc genhtml_branch_coverage=1 00:11:56.787 --rc genhtml_function_coverage=1 00:11:56.787 --rc genhtml_legend=1 00:11:56.787 --rc geninfo_all_blocks=1 00:11:56.787 --rc geninfo_unexecuted_blocks=1 00:11:56.787 00:11:56.787 ' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.787 --rc genhtml_branch_coverage=1 00:11:56.787 --rc genhtml_function_coverage=1 00:11:56.787 --rc genhtml_legend=1 00:11:56.787 --rc geninfo_all_blocks=1 00:11:56.787 --rc geninfo_unexecuted_blocks=1 00:11:56.787 00:11:56.787 ' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.787 --rc genhtml_branch_coverage=1 00:11:56.787 --rc genhtml_function_coverage=1 00:11:56.787 --rc genhtml_legend=1 00:11:56.787 --rc geninfo_all_blocks=1 00:11:56.787 --rc geninfo_unexecuted_blocks=1 00:11:56.787 00:11:56.787 ' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.787 --rc genhtml_branch_coverage=1 00:11:56.787 --rc genhtml_function_coverage=1 00:11:56.787 --rc genhtml_legend=1 00:11:56.787 --rc geninfo_all_blocks=1 00:11:56.787 --rc geninfo_unexecuted_blocks=1 00:11:56.787 00:11:56.787 ' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.787 08:45:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.787 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.787 ************************************ 00:11:56.787 START TEST nvmf_auth_target 00:11:56.787 ************************************ 00:11:56.787 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:57.047 * Looking for test storage... 
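The "[: : integer expression expected" message that common.sh emits each time it is sourced (visible above and again further down) comes from a numeric test on an empty variable in build_nvmf_app_args ('[' '' -eq 1 ']'); it is harmless noise rather than a test failure. A defensive pattern, using a hypothetical flag name since the real variable is not shown in this trace, would be:

    # Hypothetical guard: default an unset/empty flag to 0 so '[ "" -eq 1 ]' can never be evaluated.
    some_flag=${SOME_SPDK_TEST_FLAG:-0}
    if [ "$some_flag" -eq 1 ]; then
        NVMF_APP+=(--some-option)       # placeholder option, mirroring how common.sh extends NVMF_APP
    fi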
00:11:57.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:57.047 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:57.047 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:57.047 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:57.047 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:57.047 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.048 --rc genhtml_branch_coverage=1 00:11:57.048 --rc genhtml_function_coverage=1 00:11:57.048 --rc genhtml_legend=1 00:11:57.048 --rc geninfo_all_blocks=1 00:11:57.048 --rc geninfo_unexecuted_blocks=1 00:11:57.048 00:11:57.048 ' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.048 --rc genhtml_branch_coverage=1 00:11:57.048 --rc genhtml_function_coverage=1 00:11:57.048 --rc genhtml_legend=1 00:11:57.048 --rc geninfo_all_blocks=1 00:11:57.048 --rc geninfo_unexecuted_blocks=1 00:11:57.048 00:11:57.048 ' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.048 --rc genhtml_branch_coverage=1 00:11:57.048 --rc genhtml_function_coverage=1 00:11:57.048 --rc genhtml_legend=1 00:11:57.048 --rc geninfo_all_blocks=1 00:11:57.048 --rc geninfo_unexecuted_blocks=1 00:11:57.048 00:11:57.048 ' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.048 --rc genhtml_branch_coverage=1 00:11:57.048 --rc genhtml_function_coverage=1 00:11:57.048 --rc genhtml_legend=1 00:11:57.048 --rc geninfo_all_blocks=1 00:11:57.048 --rc geninfo_unexecuted_blocks=1 00:11:57.048 00:11:57.048 ' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.048 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:11:57.048 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:57.049 
08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:57.049 Cannot find device "nvmf_init_br" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:57.049 Cannot find device "nvmf_init_br2" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:57.049 Cannot find device "nvmf_tgt_br" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:57.049 Cannot find device "nvmf_tgt_br2" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:57.049 Cannot find device "nvmf_init_br" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:57.049 Cannot find device "nvmf_init_br2" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:57.049 Cannot find device "nvmf_tgt_br" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:57.049 Cannot find device "nvmf_tgt_br2" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:57.049 Cannot find device "nvmf_br" 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:57.049 Cannot find device "nvmf_init_if" 00:11:57.049 08:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:57.049 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:57.308 Cannot find device "nvmf_init_if2" 00:11:57.308 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:57.308 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:57.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.308 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:57.308 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:57.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.308 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:57.308 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:57.308 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:57.309 08:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:57.309 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:57.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:57.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:11:57.568 00:11:57.568 --- 10.0.0.3 ping statistics --- 00:11:57.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.568 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:57.568 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:57.568 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:11:57.568 00:11:57.568 --- 10.0.0.4 ping statistics --- 00:11:57.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.568 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:57.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:57.568 00:11:57.568 --- 10.0.0.1 ping statistics --- 00:11:57.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.568 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:57.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:57.568 00:11:57.568 --- 10.0.0.2 ping statistics --- 00:11:57.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.568 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67527 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67527 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67527 ']' 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.568 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.569 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
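The virtual test network that these pings just validated is built by nvmf_veth_init, traced above. Condensed into a sketch using the names and addresses from this run (the harness tags its iptables rules with an SPDK_NVMF comment so nvmf_veth_fini can strip them later):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1/.2 initiator side, 10.0.0.3/.4 target side inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done
    # allow NVMe/TCP (port 4420) in from the initiator interfaces; '...' stands for the full rule
    # text repeated inside the SPDK_NVMF comment, as in the trace above
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    # nvmf_tgt is then launched inside the namespace, as seen in the trace below:
    # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth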
00:11:57.569 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.569 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67551 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c85da22ab0b47462a66dd4470a7d8d1eb5d523adbece3d67 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kxU 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c85da22ab0b47462a66dd4470a7d8d1eb5d523adbece3d67 0 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c85da22ab0b47462a66dd4470a7d8d1eb5d523adbece3d67 0 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c85da22ab0b47462a66dd4470a7d8d1eb5d523adbece3d67 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:58.136 08:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kxU 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kxU 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.kxU 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=42c9895adf17889ef436b2eda45701d68fc506d96501ddd9891f9c8000d6f7f0 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Z5Z 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 42c9895adf17889ef436b2eda45701d68fc506d96501ddd9891f9c8000d6f7f0 3 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 42c9895adf17889ef436b2eda45701d68fc506d96501ddd9891f9c8000d6f7f0 3 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=42c9895adf17889ef436b2eda45701d68fc506d96501ddd9891f9c8000d6f7f0 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:58.136 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:58.136 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Z5Z 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Z5Z 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Z5Z 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:58.396 08:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=defa7320779497c9482eeaaf97f1f711 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BfY 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key defa7320779497c9482eeaaf97f1f711 1 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 defa7320779497c9482eeaaf97f1f711 1 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=defa7320779497c9482eeaaf97f1f711 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BfY 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BfY 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.BfY 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f7173f8410dd3a377b5b023ae4ef69f967614edb98161d90 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7Np 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f7173f8410dd3a377b5b023ae4ef69f967614edb98161d90 2 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f7173f8410dd3a377b5b023ae4ef69f967614edb98161d90 2 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f7173f8410dd3a377b5b023ae4ef69f967614edb98161d90 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7Np 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7Np 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.7Np 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d56e1f4496eefa5fe6da500dbec8da254883cf64abef7617 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:58.396 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gVy 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d56e1f4496eefa5fe6da500dbec8da254883cf64abef7617 2 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d56e1f4496eefa5fe6da500dbec8da254883cf64abef7617 2 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d56e1f4496eefa5fe6da500dbec8da254883cf64abef7617 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gVy 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gVy 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.gVy 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:58.397 08:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3b8d29b4c88e7449b873f96844a56801 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MTh 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3b8d29b4c88e7449b873f96844a56801 1 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3b8d29b4c88e7449b873f96844a56801 1 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3b8d29b4c88e7449b873f96844a56801 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:58.397 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MTh 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MTh 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.MTh 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=258cf0a4d7b9e7cf2bdc413e0bb76c840bac6d1e8658b826a588e33e8dc5e616 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ZBs 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
258cf0a4d7b9e7cf2bdc413e0bb76c840bac6d1e8658b826a588e33e8dc5e616 3 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 258cf0a4d7b9e7cf2bdc413e0bb76c840bac6d1e8658b826a588e33e8dc5e616 3 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=258cf0a4d7b9e7cf2bdc413e0bb76c840bac6d1e8658b826a588e33e8dc5e616 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ZBs 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ZBs 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ZBs 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67527 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67527 ']' 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.655 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67551 /var/tmp/host.sock 00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67551 ']' 00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:58.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
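The gen_dhchap_key calls above boil down to: draw len/2 random bytes as a hex string with xxd, wrap that string in a DHHC-1 secret, and stash it in a 0600 temp file whose path becomes keys[i]/ckeys[i]. The sketch below is not the verbatim common.sh helpers; the payload layout (ASCII key plus a 4-byte CRC32, little-endian assumed) is inferred from decoding the DHHC-1 strings that appear later in this trace.
# sketch of gen_dhchap_key / format_dhchap_key as exercised above
digest=1                                   # 0=null, 1=sha256, 2=sha384, 3=sha512
len=32                                     # key length in hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = key + zlib.crc32(key).to_bytes(4, "little")   # 4-byte checksum appended; byte order assumed
print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
EOF
)
file=$(mktemp -t spdk.key-sha256.XXX)
echo "$secret" > "$file"
chmod 0600 "$file"
With digest null (00) and a 48-character key this reproduces the DHHC-1:00:... secret the trace later feeds to nvme connect for key0.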
00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.914 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kxU 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kxU 00:11:59.483 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kxU 00:11:59.743 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Z5Z ]] 00:11:59.743 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z5Z 00:11:59.743 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.743 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.743 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.743 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z5Z 00:11:59.743 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z5Z 00:12:00.002 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:00.002 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BfY 00:12:00.002 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.002 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.002 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.002 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BfY 00:12:00.002 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BfY 00:12:00.261 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.7Np ]] 00:12:00.261 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Np 00:12:00.261 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.261 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.261 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.261 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Np 00:12:00.261 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Np 00:12:00.830 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:00.830 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gVy 00:12:00.830 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.830 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.830 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.830 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gVy 00:12:00.830 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gVy 00:12:01.089 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.MTh ]] 00:12:01.089 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MTh 00:12:01.089 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.089 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.089 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.089 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MTh 00:12:01.089 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MTh 00:12:01.348 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:01.348 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZBs 00:12:01.348 08:45:32 
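Before any connection is attempted, each generated key file is registered under a well-known name on both applications. rpc_cmd and hostrpc in the trace are the test's wrappers; the sketch below uses plain rpc.py against the two sockets they resolve to, with the file paths copied from the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target-side keyring (default /var/tmp/spdk.sock)
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.kxU
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z5Z
# host-side keyring
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kxU
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z5Z
# ...and likewise key1/ckey1, key2/ckey2, key3 (key3 has no controller key)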
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.348 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.348 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.348 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZBs 00:12:01.348 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ZBs 00:12:01.607 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:01.608 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:01.608 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.608 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.608 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:01.608 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:01.866 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:01.866 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.866 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.867 08:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.433 00:12:02.433 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.433 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.433 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.692 { 00:12:02.692 "cntlid": 1, 00:12:02.692 "qid": 0, 00:12:02.692 "state": "enabled", 00:12:02.692 "thread": "nvmf_tgt_poll_group_000", 00:12:02.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:02.692 "listen_address": { 00:12:02.692 "trtype": "TCP", 00:12:02.692 "adrfam": "IPv4", 00:12:02.692 "traddr": "10.0.0.3", 00:12:02.692 "trsvcid": "4420" 00:12:02.692 }, 00:12:02.692 "peer_address": { 00:12:02.692 "trtype": "TCP", 00:12:02.692 "adrfam": "IPv4", 00:12:02.692 "traddr": "10.0.0.1", 00:12:02.692 "trsvcid": "58678" 00:12:02.692 }, 00:12:02.692 "auth": { 00:12:02.692 "state": "completed", 00:12:02.692 "digest": "sha256", 00:12:02.692 "dhgroup": "null" 00:12:02.692 } 00:12:02.692 } 00:12:02.692 ]' 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:02.692 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.951 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.951 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.951 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.209 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:03.209 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.480 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.480 08:45:38 
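Each pass through the loop above is one authenticated attach, verified on the target, torn down, and then repeated with the kernel initiator using the raw DHHC-1 secrets. Pulled together from the commands in the trace (wrappers replaced by plain rpc.py, and assuming the key files hold the DHHC-1 strings generated earlier), one round for key0 looks roughly like this:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb
# host side: pin the initiator to the digest/dhgroup combination under test
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# target side: admit the host NQN and bind it to this round's key pair
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach; the DH-HMAC-CHAP transaction runs during this connect
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# target side: the qpair must report the negotiated digest/dhgroup and auth state "completed"
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
# tear down, then repeat the attach with the kernel initiator and the DHHC-1 secrets
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -l 0 -q "$hostnqn" \
  --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb \
  --dhchap-secret "$(cat /tmp/spdk.key-null.kxU)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.Z5Z)"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
The remaining rounds in the trace differ only in which keyN/ckeyN pair is bound; key3 is registered without a controller key, so the --dhchap-ctrlr-key/--dhchap-ctrl-secret arguments are simply omitted for it.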
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.480 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.480 { 00:12:08.480 "cntlid": 3, 00:12:08.480 "qid": 0, 00:12:08.480 "state": "enabled", 00:12:08.480 "thread": "nvmf_tgt_poll_group_000", 00:12:08.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:08.480 "listen_address": { 00:12:08.480 "trtype": "TCP", 00:12:08.480 "adrfam": "IPv4", 00:12:08.480 "traddr": "10.0.0.3", 00:12:08.480 "trsvcid": "4420" 00:12:08.480 }, 00:12:08.480 "peer_address": { 00:12:08.480 "trtype": "TCP", 00:12:08.480 "adrfam": "IPv4", 00:12:08.480 "traddr": "10.0.0.1", 00:12:08.480 "trsvcid": "48026" 00:12:08.480 }, 00:12:08.480 "auth": { 00:12:08.480 "state": "completed", 00:12:08.480 "digest": "sha256", 00:12:08.480 "dhgroup": "null" 00:12:08.480 } 00:12:08.480 } 00:12:08.480 ]' 00:12:08.480 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.739 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.739 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.739 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:08.739 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.739 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.739 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.739 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.306 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret 
DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:09.306 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:09.875 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.875 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:09.875 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.875 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.875 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.875 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.875 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:09.875 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.134 08:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.134 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.134 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.134 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.701 00:12:10.701 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.701 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.701 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.960 { 00:12:10.960 "cntlid": 5, 00:12:10.960 "qid": 0, 00:12:10.960 "state": "enabled", 00:12:10.960 "thread": "nvmf_tgt_poll_group_000", 00:12:10.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:10.960 "listen_address": { 00:12:10.960 "trtype": "TCP", 00:12:10.960 "adrfam": "IPv4", 00:12:10.960 "traddr": "10.0.0.3", 00:12:10.960 "trsvcid": "4420" 00:12:10.960 }, 00:12:10.960 "peer_address": { 00:12:10.960 "trtype": "TCP", 00:12:10.960 "adrfam": "IPv4", 00:12:10.960 "traddr": "10.0.0.1", 00:12:10.960 "trsvcid": "48064" 00:12:10.960 }, 00:12:10.960 "auth": { 00:12:10.960 "state": "completed", 00:12:10.960 "digest": "sha256", 00:12:10.960 "dhgroup": "null" 00:12:10.960 } 00:12:10.960 } 00:12:10.960 ]' 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.960 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.219 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:11.219 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:12.156 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.156 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:12.156 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.156 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.156 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.156 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.156 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:12.156 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.415 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.674 00:12:12.674 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.674 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.674 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.933 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.933 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.933 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.933 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.933 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.933 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.933 { 00:12:12.933 "cntlid": 7, 00:12:12.933 "qid": 0, 00:12:12.933 "state": "enabled", 00:12:12.933 "thread": "nvmf_tgt_poll_group_000", 00:12:12.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:12.933 "listen_address": { 00:12:12.933 "trtype": "TCP", 00:12:12.933 "adrfam": "IPv4", 00:12:12.933 "traddr": "10.0.0.3", 00:12:12.933 "trsvcid": "4420" 00:12:12.933 }, 00:12:12.933 "peer_address": { 00:12:12.933 "trtype": "TCP", 00:12:12.933 "adrfam": "IPv4", 00:12:12.933 "traddr": "10.0.0.1", 00:12:12.933 "trsvcid": "48090" 00:12:12.933 }, 00:12:12.933 "auth": { 00:12:12.933 "state": "completed", 00:12:12.933 "digest": "sha256", 00:12:12.933 "dhgroup": "null" 00:12:12.933 } 00:12:12.933 } 00:12:12.933 ]' 00:12:12.933 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.191 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.191 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.191 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:13.191 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.191 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.191 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.191 08:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.450 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:13.450 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:14.386 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.386 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:14.386 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.386 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.386 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.386 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:14.386 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.387 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:14.387 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.645 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.904 00:12:14.904 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.904 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.904 08:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.164 { 00:12:15.164 "cntlid": 9, 00:12:15.164 "qid": 0, 00:12:15.164 "state": "enabled", 00:12:15.164 "thread": "nvmf_tgt_poll_group_000", 00:12:15.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:15.164 "listen_address": { 00:12:15.164 "trtype": "TCP", 00:12:15.164 "adrfam": "IPv4", 00:12:15.164 "traddr": "10.0.0.3", 00:12:15.164 "trsvcid": "4420" 00:12:15.164 }, 00:12:15.164 "peer_address": { 00:12:15.164 "trtype": "TCP", 00:12:15.164 "adrfam": "IPv4", 00:12:15.164 "traddr": "10.0.0.1", 00:12:15.164 "trsvcid": "48108" 00:12:15.164 }, 00:12:15.164 "auth": { 00:12:15.164 "state": "completed", 00:12:15.164 "digest": "sha256", 00:12:15.164 "dhgroup": "ffdhe2048" 00:12:15.164 } 00:12:15.164 } 00:12:15.164 ]' 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.164 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.423 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:15.423 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.423 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.423 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.423 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.682 
08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:15.682 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:16.250 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.250 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:16.250 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.250 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.250 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.250 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.250 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:16.250 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.818 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.078 00:12:17.078 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.078 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.078 08:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.338 { 00:12:17.338 "cntlid": 11, 00:12:17.338 "qid": 0, 00:12:17.338 "state": "enabled", 00:12:17.338 "thread": "nvmf_tgt_poll_group_000", 00:12:17.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:17.338 "listen_address": { 00:12:17.338 "trtype": "TCP", 00:12:17.338 "adrfam": "IPv4", 00:12:17.338 "traddr": "10.0.0.3", 00:12:17.338 "trsvcid": "4420" 00:12:17.338 }, 00:12:17.338 "peer_address": { 00:12:17.338 "trtype": "TCP", 00:12:17.338 "adrfam": "IPv4", 00:12:17.338 "traddr": "10.0.0.1", 00:12:17.338 "trsvcid": "38694" 00:12:17.338 }, 00:12:17.338 "auth": { 00:12:17.338 "state": "completed", 00:12:17.338 "digest": "sha256", 00:12:17.338 "dhgroup": "ffdhe2048" 00:12:17.338 } 00:12:17.338 } 00:12:17.338 ]' 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:17.338 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.597 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.597 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.597 
08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.857 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:17.857 08:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:18.425 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.425 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:18.425 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.425 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.425 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.425 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.425 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:18.425 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.684 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.252 00:12:19.252 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.252 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.252 08:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.512 { 00:12:19.512 "cntlid": 13, 00:12:19.512 "qid": 0, 00:12:19.512 "state": "enabled", 00:12:19.512 "thread": "nvmf_tgt_poll_group_000", 00:12:19.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:19.512 "listen_address": { 00:12:19.512 "trtype": "TCP", 00:12:19.512 "adrfam": "IPv4", 00:12:19.512 "traddr": "10.0.0.3", 00:12:19.512 "trsvcid": "4420" 00:12:19.512 }, 00:12:19.512 "peer_address": { 00:12:19.512 "trtype": "TCP", 00:12:19.512 "adrfam": "IPv4", 00:12:19.512 "traddr": "10.0.0.1", 00:12:19.512 "trsvcid": "38724" 00:12:19.512 }, 00:12:19.512 "auth": { 00:12:19.512 "state": "completed", 00:12:19.512 "digest": "sha256", 00:12:19.512 "dhgroup": "ffdhe2048" 00:12:19.512 } 00:12:19.512 } 00:12:19.512 ]' 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.512 08:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.512 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.080 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:20.080 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:20.648 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.648 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:20.648 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.648 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.648 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.648 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.648 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:20.648 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
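The trace above and below repeats one basic cycle per digest/dhgroup/key combination. A condensed sketch of that cycle, using only the RPCs and nvme-cli calls that actually appear in the log (socket path, NQNs, addresses and key names are copied from the log; the variable names, the DHHC secret placeholder and the loop framing are illustrative, not part of the captured output):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb

    # host-side bdev layer: restrict negotiation to one digest/dhgroup pair
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # target side (rpc_cmd in the log): allow the host with a DH-HMAC-CHAP key pair
    rpc_cmd nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach an authenticated controller, then check the qpair reports auth.state == "completed"
    $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'
    # tear down, then repeat the check through the kernel initiator via nvme-cli
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.3 -n $subnqn -i 1 -q $hostnqn --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb \
        -l 0 --dhchap-secret "DHHC-1:..."    # placeholder; the log passes the full secret (and, for bidirectional auth, --dhchap-ctrl-secret)
    nvme disconnect -n $subnqn
    rpc_cmd nvmf_subsystem_remove_host $subnqn $hostnqn

The log then advances keyid (key0 through key3; key3 has no controller key, so no --dhchap-ctrlr-key) and, in the outer loop, the dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ...), re-running the same sequence each time.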
00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.906 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:20.907 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.907 08:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.165 00:12:21.165 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.165 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.165 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.733 { 00:12:21.733 "cntlid": 15, 00:12:21.733 "qid": 0, 00:12:21.733 "state": "enabled", 00:12:21.733 "thread": "nvmf_tgt_poll_group_000", 00:12:21.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:21.733 "listen_address": { 00:12:21.733 "trtype": "TCP", 00:12:21.733 "adrfam": "IPv4", 00:12:21.733 "traddr": "10.0.0.3", 00:12:21.733 "trsvcid": "4420" 00:12:21.733 }, 00:12:21.733 "peer_address": { 00:12:21.733 "trtype": "TCP", 00:12:21.733 "adrfam": "IPv4", 00:12:21.733 "traddr": "10.0.0.1", 00:12:21.733 "trsvcid": "38746" 00:12:21.733 }, 00:12:21.733 "auth": { 00:12:21.733 "state": "completed", 00:12:21.733 "digest": "sha256", 00:12:21.733 "dhgroup": "ffdhe2048" 00:12:21.733 } 00:12:21.733 } 00:12:21.733 ]' 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.733 
08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.733 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.992 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:21.992 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:22.928 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.928 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:22.928 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.928 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.929 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.929 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.929 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.929 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:22.929 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.187 08:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.445 00:12:23.445 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.445 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.445 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.012 { 00:12:24.012 "cntlid": 17, 00:12:24.012 "qid": 0, 00:12:24.012 "state": "enabled", 00:12:24.012 "thread": "nvmf_tgt_poll_group_000", 00:12:24.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:24.012 "listen_address": { 00:12:24.012 "trtype": "TCP", 00:12:24.012 "adrfam": "IPv4", 00:12:24.012 "traddr": "10.0.0.3", 00:12:24.012 "trsvcid": "4420" 00:12:24.012 }, 00:12:24.012 "peer_address": { 00:12:24.012 "trtype": "TCP", 00:12:24.012 "adrfam": "IPv4", 00:12:24.012 "traddr": "10.0.0.1", 00:12:24.012 "trsvcid": "38776" 00:12:24.012 }, 00:12:24.012 "auth": { 00:12:24.012 "state": "completed", 00:12:24.012 "digest": "sha256", 00:12:24.012 "dhgroup": "ffdhe3072" 00:12:24.012 } 00:12:24.012 } 00:12:24.012 ]' 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.012 08:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.012 08:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.271 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:24.271 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:25.207 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.207 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:25.207 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.207 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.207 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.207 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.207 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:25.207 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.466 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.467 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.725 00:12:25.725 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.725 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.725 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.293 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.293 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.293 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.293 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.293 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.293 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.293 { 00:12:26.293 "cntlid": 19, 00:12:26.293 "qid": 0, 00:12:26.293 "state": "enabled", 00:12:26.293 "thread": "nvmf_tgt_poll_group_000", 00:12:26.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:26.293 "listen_address": { 00:12:26.293 "trtype": "TCP", 00:12:26.293 "adrfam": "IPv4", 00:12:26.293 "traddr": "10.0.0.3", 00:12:26.293 "trsvcid": "4420" 00:12:26.293 }, 00:12:26.293 "peer_address": { 00:12:26.293 "trtype": "TCP", 00:12:26.293 "adrfam": "IPv4", 00:12:26.294 "traddr": "10.0.0.1", 00:12:26.294 "trsvcid": "38802" 00:12:26.294 }, 00:12:26.294 "auth": { 00:12:26.294 "state": "completed", 00:12:26.294 "digest": "sha256", 00:12:26.294 "dhgroup": "ffdhe3072" 00:12:26.294 } 00:12:26.294 } 00:12:26.294 ]' 00:12:26.294 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.294 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.294 08:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.294 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:26.294 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.294 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.294 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.294 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.553 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:26.553 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:27.527 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.528 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:27.528 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.528 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.528 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.528 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.528 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:27.528 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.788 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.046 00:12:28.306 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.306 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.306 08:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.565 { 00:12:28.565 "cntlid": 21, 00:12:28.565 "qid": 0, 00:12:28.565 "state": "enabled", 00:12:28.565 "thread": "nvmf_tgt_poll_group_000", 00:12:28.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:28.565 "listen_address": { 00:12:28.565 "trtype": "TCP", 00:12:28.565 "adrfam": "IPv4", 00:12:28.565 "traddr": "10.0.0.3", 00:12:28.565 "trsvcid": "4420" 00:12:28.565 }, 00:12:28.565 "peer_address": { 00:12:28.565 "trtype": "TCP", 00:12:28.565 "adrfam": "IPv4", 00:12:28.565 "traddr": "10.0.0.1", 00:12:28.565 "trsvcid": "60296" 00:12:28.565 }, 00:12:28.565 "auth": { 00:12:28.565 "state": "completed", 00:12:28.565 "digest": "sha256", 00:12:28.565 "dhgroup": "ffdhe3072" 00:12:28.565 } 00:12:28.565 } 00:12:28.565 ]' 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.565 08:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.565 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.132 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:29.132 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:29.700 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.700 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:29.700 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.700 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.700 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.700 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.700 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:29.700 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.268 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.527 00:12:30.527 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.527 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.527 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.786 { 00:12:30.786 "cntlid": 23, 00:12:30.786 "qid": 0, 00:12:30.786 "state": "enabled", 00:12:30.786 "thread": "nvmf_tgt_poll_group_000", 00:12:30.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:30.786 "listen_address": { 00:12:30.786 "trtype": "TCP", 00:12:30.786 "adrfam": "IPv4", 00:12:30.786 "traddr": "10.0.0.3", 00:12:30.786 "trsvcid": "4420" 00:12:30.786 }, 00:12:30.786 "peer_address": { 00:12:30.786 "trtype": "TCP", 00:12:30.786 "adrfam": "IPv4", 00:12:30.786 "traddr": "10.0.0.1", 00:12:30.786 "trsvcid": "60336" 00:12:30.786 }, 00:12:30.786 "auth": { 00:12:30.786 "state": "completed", 00:12:30.786 "digest": "sha256", 00:12:30.786 "dhgroup": "ffdhe3072" 00:12:30.786 } 00:12:30.786 } 00:12:30.786 ]' 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:30.786 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.045 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:31.045 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.045 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.045 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.045 08:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.349 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:31.349 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:31.918 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.919 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:31.919 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.919 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.919 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.919 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:31.919 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.919 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:31.919 08:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.487 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.745 00:12:32.745 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.745 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.745 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.312 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.312 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.312 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.312 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.312 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.312 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.312 { 00:12:33.312 "cntlid": 25, 00:12:33.312 "qid": 0, 00:12:33.312 "state": "enabled", 00:12:33.312 "thread": "nvmf_tgt_poll_group_000", 00:12:33.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:33.312 "listen_address": { 00:12:33.312 "trtype": "TCP", 00:12:33.312 "adrfam": "IPv4", 00:12:33.312 "traddr": "10.0.0.3", 00:12:33.312 "trsvcid": "4420" 00:12:33.312 }, 00:12:33.312 "peer_address": { 00:12:33.312 "trtype": "TCP", 00:12:33.312 "adrfam": "IPv4", 00:12:33.312 "traddr": "10.0.0.1", 00:12:33.312 "trsvcid": "60358" 00:12:33.312 }, 00:12:33.312 "auth": { 00:12:33.312 "state": "completed", 00:12:33.312 "digest": "sha256", 00:12:33.312 "dhgroup": "ffdhe4096" 00:12:33.312 } 00:12:33.312 } 00:12:33.312 ]' 00:12:33.312 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:33.312 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.312 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.312 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:33.312 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.312 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.312 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.312 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.569 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:33.569 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:34.502 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.502 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:34.502 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.502 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.502 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.502 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.502 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:34.502 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.759 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.018 00:12:35.018 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.018 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.018 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.595 { 00:12:35.595 "cntlid": 27, 00:12:35.595 "qid": 0, 00:12:35.595 "state": "enabled", 00:12:35.595 "thread": "nvmf_tgt_poll_group_000", 00:12:35.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:35.595 "listen_address": { 00:12:35.595 "trtype": "TCP", 00:12:35.595 "adrfam": "IPv4", 00:12:35.595 "traddr": "10.0.0.3", 00:12:35.595 "trsvcid": "4420" 00:12:35.595 }, 00:12:35.595 "peer_address": { 00:12:35.595 "trtype": "TCP", 00:12:35.595 "adrfam": "IPv4", 00:12:35.595 "traddr": "10.0.0.1", 00:12:35.595 "trsvcid": "60392" 00:12:35.595 }, 00:12:35.595 "auth": { 00:12:35.595 "state": "completed", 
00:12:35.595 "digest": "sha256", 00:12:35.595 "dhgroup": "ffdhe4096" 00:12:35.595 } 00:12:35.595 } 00:12:35.595 ]' 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.595 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.853 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:35.853 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:36.788 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.788 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:36.788 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.788 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.788 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.788 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.788 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:36.788 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.045 08:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.045 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.046 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.304 00:12:37.304 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.304 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.304 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.562 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.562 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.562 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.562 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.820 { 00:12:37.820 "cntlid": 29, 00:12:37.820 "qid": 0, 00:12:37.820 "state": "enabled", 00:12:37.820 "thread": "nvmf_tgt_poll_group_000", 00:12:37.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:37.820 "listen_address": { 00:12:37.820 "trtype": "TCP", 00:12:37.820 "adrfam": "IPv4", 00:12:37.820 "traddr": "10.0.0.3", 00:12:37.820 "trsvcid": "4420" 00:12:37.820 }, 00:12:37.820 "peer_address": { 00:12:37.820 "trtype": "TCP", 00:12:37.820 "adrfam": 
"IPv4", 00:12:37.820 "traddr": "10.0.0.1", 00:12:37.820 "trsvcid": "34936" 00:12:37.820 }, 00:12:37.820 "auth": { 00:12:37.820 "state": "completed", 00:12:37.820 "digest": "sha256", 00:12:37.820 "dhgroup": "ffdhe4096" 00:12:37.820 } 00:12:37.820 } 00:12:37.820 ]' 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.820 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.079 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:38.079 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:39.013 08:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:12:39.013 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.014 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.272 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.272 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:39.272 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.272 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.530 00:12:39.530 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.530 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.530 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.096 { 00:12:40.096 "cntlid": 31, 00:12:40.096 "qid": 0, 00:12:40.096 "state": "enabled", 00:12:40.096 "thread": "nvmf_tgt_poll_group_000", 00:12:40.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:40.096 "listen_address": { 00:12:40.096 "trtype": "TCP", 00:12:40.096 "adrfam": "IPv4", 00:12:40.096 "traddr": "10.0.0.3", 00:12:40.096 "trsvcid": "4420" 00:12:40.096 }, 00:12:40.096 "peer_address": { 00:12:40.096 "trtype": "TCP", 
00:12:40.096 "adrfam": "IPv4", 00:12:40.096 "traddr": "10.0.0.1", 00:12:40.096 "trsvcid": "34970" 00:12:40.096 }, 00:12:40.096 "auth": { 00:12:40.096 "state": "completed", 00:12:40.096 "digest": "sha256", 00:12:40.096 "dhgroup": "ffdhe4096" 00:12:40.096 } 00:12:40.096 } 00:12:40.096 ]' 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.096 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.355 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:40.355 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:40.922 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:41.180 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:41.180 
08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.180 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:41.180 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:41.180 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:41.180 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.180 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.180 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.180 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.437 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.437 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.437 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.437 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.696 00:12:41.696 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.696 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.696 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.262 { 00:12:42.262 "cntlid": 33, 00:12:42.262 "qid": 0, 00:12:42.262 "state": "enabled", 00:12:42.262 "thread": "nvmf_tgt_poll_group_000", 00:12:42.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:42.262 "listen_address": { 00:12:42.262 "trtype": "TCP", 00:12:42.262 "adrfam": "IPv4", 00:12:42.262 "traddr": 
"10.0.0.3", 00:12:42.262 "trsvcid": "4420" 00:12:42.262 }, 00:12:42.262 "peer_address": { 00:12:42.262 "trtype": "TCP", 00:12:42.262 "adrfam": "IPv4", 00:12:42.262 "traddr": "10.0.0.1", 00:12:42.262 "trsvcid": "34992" 00:12:42.262 }, 00:12:42.262 "auth": { 00:12:42.262 "state": "completed", 00:12:42.262 "digest": "sha256", 00:12:42.262 "dhgroup": "ffdhe6144" 00:12:42.262 } 00:12:42.262 } 00:12:42.262 ]' 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.262 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.262 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:42.262 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.262 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.262 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.262 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.521 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:42.521 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:43.455 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.455 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:43.455 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.455 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.455 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.455 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.455 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:43.455 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.714 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.280 00:12:44.280 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.280 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.281 08:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.538 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.538 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.538 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.538 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.538 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.538 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.538 { 00:12:44.538 "cntlid": 35, 00:12:44.538 "qid": 0, 00:12:44.539 "state": "enabled", 00:12:44.539 "thread": "nvmf_tgt_poll_group_000", 
00:12:44.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:44.539 "listen_address": { 00:12:44.539 "trtype": "TCP", 00:12:44.539 "adrfam": "IPv4", 00:12:44.539 "traddr": "10.0.0.3", 00:12:44.539 "trsvcid": "4420" 00:12:44.539 }, 00:12:44.539 "peer_address": { 00:12:44.539 "trtype": "TCP", 00:12:44.539 "adrfam": "IPv4", 00:12:44.539 "traddr": "10.0.0.1", 00:12:44.539 "trsvcid": "35006" 00:12:44.539 }, 00:12:44.539 "auth": { 00:12:44.539 "state": "completed", 00:12:44.539 "digest": "sha256", 00:12:44.539 "dhgroup": "ffdhe6144" 00:12:44.539 } 00:12:44.539 } 00:12:44.539 ]' 00:12:44.539 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.539 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:44.539 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.539 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:44.539 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.539 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.539 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.539 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.106 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:45.106 08:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:45.672 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.672 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:45.672 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.672 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.672 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.672 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.672 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:45.672 08:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:45.930 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:45.930 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.930 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:45.930 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:45.930 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:45.931 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.931 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.931 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.931 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.931 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.931 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.931 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.931 08:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.497 00:12:46.497 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.497 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.497 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.754 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.754 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.754 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.754 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.754 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.754 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.754 { 
00:12:46.754 "cntlid": 37, 00:12:46.754 "qid": 0, 00:12:46.754 "state": "enabled", 00:12:46.754 "thread": "nvmf_tgt_poll_group_000", 00:12:46.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:46.754 "listen_address": { 00:12:46.754 "trtype": "TCP", 00:12:46.754 "adrfam": "IPv4", 00:12:46.754 "traddr": "10.0.0.3", 00:12:46.754 "trsvcid": "4420" 00:12:46.754 }, 00:12:46.754 "peer_address": { 00:12:46.754 "trtype": "TCP", 00:12:46.754 "adrfam": "IPv4", 00:12:46.754 "traddr": "10.0.0.1", 00:12:46.754 "trsvcid": "40530" 00:12:46.754 }, 00:12:46.754 "auth": { 00:12:46.754 "state": "completed", 00:12:46.754 "digest": "sha256", 00:12:46.754 "dhgroup": "ffdhe6144" 00:12:46.754 } 00:12:46.754 } 00:12:46.754 ]' 00:12:46.754 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.012 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:47.012 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.012 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:47.012 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.012 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.012 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.012 08:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.270 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:47.270 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:47.838 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.838 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:47.838 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.838 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.096 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.096 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.096 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:48.096 08:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:48.354 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:48.354 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.354 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.354 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:48.354 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:48.354 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.354 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:12:48.355 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.355 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.355 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.355 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:48.355 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.355 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.921 00:12:48.921 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.921 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.921 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.180 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.180 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.180 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.180 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.180 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.180 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:12:49.180 { 00:12:49.180 "cntlid": 39, 00:12:49.180 "qid": 0, 00:12:49.180 "state": "enabled", 00:12:49.180 "thread": "nvmf_tgt_poll_group_000", 00:12:49.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:49.180 "listen_address": { 00:12:49.180 "trtype": "TCP", 00:12:49.180 "adrfam": "IPv4", 00:12:49.180 "traddr": "10.0.0.3", 00:12:49.180 "trsvcid": "4420" 00:12:49.180 }, 00:12:49.180 "peer_address": { 00:12:49.180 "trtype": "TCP", 00:12:49.180 "adrfam": "IPv4", 00:12:49.180 "traddr": "10.0.0.1", 00:12:49.180 "trsvcid": "40560" 00:12:49.180 }, 00:12:49.180 "auth": { 00:12:49.180 "state": "completed", 00:12:49.180 "digest": "sha256", 00:12:49.180 "dhgroup": "ffdhe6144" 00:12:49.180 } 00:12:49.180 } 00:12:49.180 ]' 00:12:49.180 08:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.180 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:49.180 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.180 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:49.180 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.438 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.438 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.438 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.704 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:49.704 08:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:50.270 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.530 08:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.098 00:12:51.357 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.357 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.357 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.734 { 00:12:51.734 "cntlid": 41, 00:12:51.734 "qid": 0, 00:12:51.734 "state": "enabled", 00:12:51.734 "thread": "nvmf_tgt_poll_group_000", 00:12:51.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:51.734 "listen_address": { 00:12:51.734 "trtype": "TCP", 00:12:51.734 "adrfam": "IPv4", 00:12:51.734 "traddr": "10.0.0.3", 00:12:51.734 "trsvcid": "4420" 00:12:51.734 }, 00:12:51.734 "peer_address": { 00:12:51.734 "trtype": "TCP", 00:12:51.734 "adrfam": "IPv4", 00:12:51.734 "traddr": "10.0.0.1", 00:12:51.734 "trsvcid": "40568" 00:12:51.734 }, 00:12:51.734 "auth": { 00:12:51.734 "state": "completed", 00:12:51.734 "digest": "sha256", 00:12:51.734 "dhgroup": "ffdhe8192" 00:12:51.734 } 00:12:51.734 } 00:12:51.734 ]' 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.734 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.007 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:52.007 08:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:12:52.574 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.833 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:52.833 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.833 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.833 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:52.833 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.833 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:52.833 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.092 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.660 00:12:53.660 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.660 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.660 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.229 08:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.229 { 00:12:54.229 "cntlid": 43, 00:12:54.229 "qid": 0, 00:12:54.229 "state": "enabled", 00:12:54.229 "thread": "nvmf_tgt_poll_group_000", 00:12:54.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:54.229 "listen_address": { 00:12:54.229 "trtype": "TCP", 00:12:54.229 "adrfam": "IPv4", 00:12:54.229 "traddr": "10.0.0.3", 00:12:54.229 "trsvcid": "4420" 00:12:54.229 }, 00:12:54.229 "peer_address": { 00:12:54.229 "trtype": "TCP", 00:12:54.229 "adrfam": "IPv4", 00:12:54.229 "traddr": "10.0.0.1", 00:12:54.229 "trsvcid": "40598" 00:12:54.229 }, 00:12:54.229 "auth": { 00:12:54.229 "state": "completed", 00:12:54.229 "digest": "sha256", 00:12:54.229 "dhgroup": "ffdhe8192" 00:12:54.229 } 00:12:54.229 } 00:12:54.229 ]' 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:54.229 08:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.229 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.229 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.229 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.489 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:54.489 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:12:55.425 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.425 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:55.425 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.425 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
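Each keyid in the trace goes through the same three RPC steps before the qpair check: restrict the host's allowed DH-HMAC-CHAP parameters, register the host NQN on the subsystem with the key pair, and attach a controller through the host bdev layer. A condensed sketch of one such iteration, using the key2/ckey2 pass that follows; the key names are keyring entries assumed to have been registered earlier in the run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
hostnqn="nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb"
# 1) Limit the host to a single digest/dhgroup combination for this pass.
"$rpc" -s "$host_sock" bdev_nvme_set_options \
  --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# 2) Allow the host on the target subsystem with host and controller keys.
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 3) Attach a controller from the host side; authentication runs here.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 \
  -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
  -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2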
00:12:55.425 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.425 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.425 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:55.425 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.685 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.253 00:12:56.253 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.253 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.253 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.511 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.511 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.511 08:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.511 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.511 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.511 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.511 { 00:12:56.511 "cntlid": 45, 00:12:56.511 "qid": 0, 00:12:56.511 "state": "enabled", 00:12:56.511 "thread": "nvmf_tgt_poll_group_000", 00:12:56.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:56.511 "listen_address": { 00:12:56.511 "trtype": "TCP", 00:12:56.511 "adrfam": "IPv4", 00:12:56.511 "traddr": "10.0.0.3", 00:12:56.511 "trsvcid": "4420" 00:12:56.511 }, 00:12:56.511 "peer_address": { 00:12:56.511 "trtype": "TCP", 00:12:56.512 "adrfam": "IPv4", 00:12:56.512 "traddr": "10.0.0.1", 00:12:56.512 "trsvcid": "40628" 00:12:56.512 }, 00:12:56.512 "auth": { 00:12:56.512 "state": "completed", 00:12:56.512 "digest": "sha256", 00:12:56.512 "dhgroup": "ffdhe8192" 00:12:56.512 } 00:12:56.512 } 00:12:56.512 ]' 00:12:56.512 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.512 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.512 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.770 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:56.770 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.770 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.770 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.770 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.029 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:57.029 08:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:12:57.645 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.904 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:12:57.905 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
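After the bdev-level attach and detach, each pass also exercises the kernel initiator: nvme-cli connects with the same secret material passed on the command line, and the pass ends once the controller disconnects cleanly. A sketch of that leg with the flags as they appear above; the DHHC-1 strings are placeholders, not real keys:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn="nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb"
# Connect through the kernel NVMe/TCP initiator using DH-HMAC-CHAP secrets.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 \
  --dhchap-secret "DHHC-1:02:<host secret>:" \
  --dhchap-ctrl-secret "DHHC-1:01:<controller secret>:"
# Tear the connection back down; a clean disconnect ends the pass.
nvme disconnect -n "$subnqn"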
00:12:57.905 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.905 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.905 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.905 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:57.905 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.165 08:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.733 00:12:58.733 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.733 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.733 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.301 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.301 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.301 
08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.301 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.301 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.301 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.301 { 00:12:59.301 "cntlid": 47, 00:12:59.301 "qid": 0, 00:12:59.301 "state": "enabled", 00:12:59.301 "thread": "nvmf_tgt_poll_group_000", 00:12:59.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:12:59.301 "listen_address": { 00:12:59.301 "trtype": "TCP", 00:12:59.301 "adrfam": "IPv4", 00:12:59.301 "traddr": "10.0.0.3", 00:12:59.301 "trsvcid": "4420" 00:12:59.301 }, 00:12:59.301 "peer_address": { 00:12:59.301 "trtype": "TCP", 00:12:59.301 "adrfam": "IPv4", 00:12:59.301 "traddr": "10.0.0.1", 00:12:59.301 "trsvcid": "42090" 00:12:59.301 }, 00:12:59.301 "auth": { 00:12:59.301 "state": "completed", 00:12:59.301 "digest": "sha256", 00:12:59.301 "dhgroup": "ffdhe8192" 00:12:59.301 } 00:12:59.301 } 00:12:59.301 ]' 00:12:59.301 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.301 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.301 08:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.301 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:59.301 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.301 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.301 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.301 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.560 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:12:59.560 08:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
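The target/auth.sh@118, @119 and @120 markers that follow show the nesting that generates all of these passes: digests outermost, DH groups next, key ids innermost, with the host options reset before each connect_authenticate call. A reconstructed loop skeleton; only sha256, sha384, null, ffdhe2048 and ffdhe8192 are visible in this excerpt, so the full array contents here are assumptions:

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Restrict the host to this combination, then run one auth pass.
      hostrpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done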
00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:00.494 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.751 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.008 00:13:01.008 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.008 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.008 08:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.266 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.266 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.266 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.266 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.266 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.266 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.266 { 00:13:01.266 "cntlid": 49, 00:13:01.266 "qid": 0, 00:13:01.266 "state": "enabled", 00:13:01.266 "thread": "nvmf_tgt_poll_group_000", 00:13:01.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:01.266 "listen_address": { 00:13:01.266 "trtype": "TCP", 00:13:01.266 "adrfam": "IPv4", 00:13:01.266 "traddr": "10.0.0.3", 00:13:01.266 "trsvcid": "4420" 00:13:01.266 }, 00:13:01.266 "peer_address": { 00:13:01.266 "trtype": "TCP", 00:13:01.266 "adrfam": "IPv4", 00:13:01.266 "traddr": "10.0.0.1", 00:13:01.266 "trsvcid": "42128" 00:13:01.266 }, 00:13:01.266 "auth": { 00:13:01.266 "state": "completed", 00:13:01.266 "digest": "sha384", 00:13:01.266 "dhgroup": "null" 00:13:01.266 } 00:13:01.266 } 00:13:01.266 ]' 00:13:01.266 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.534 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:01.534 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.534 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:01.534 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.534 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.534 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.534 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.792 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:01.792 08:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:02.724 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.724 08:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:02.724 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.724 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.724 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.724 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.724 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:02.724 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.982 08:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.240 00:13:03.240 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.240 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
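The ckey assignment logged at target/auth.sh@68 in each pass, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), uses bash alternate-value expansion: the option pair is emitted only when a controller key exists for that key id. That is why the key3 passes above add the host with --dhchap-key key3 alone (unidirectional authentication, ckeys[3] evidently empty), while the other key ids also pass --dhchap-ctrlr-key. A tiny illustration with placeholder values:

# ${var:+word} expands to "word" only when var is set and non-empty, so an
# empty ckeys[3] leaves the ckey array empty and the option disappears.
ckeys=("ctrl-secret-0" "ctrl-secret-1" "ctrl-secret-2" "")   # placeholders
keyid=3
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]} extra argument(s)"   # prints: 0 extra argument(s)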
00:13:03.240 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.807 { 00:13:03.807 "cntlid": 51, 00:13:03.807 "qid": 0, 00:13:03.807 "state": "enabled", 00:13:03.807 "thread": "nvmf_tgt_poll_group_000", 00:13:03.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:03.807 "listen_address": { 00:13:03.807 "trtype": "TCP", 00:13:03.807 "adrfam": "IPv4", 00:13:03.807 "traddr": "10.0.0.3", 00:13:03.807 "trsvcid": "4420" 00:13:03.807 }, 00:13:03.807 "peer_address": { 00:13:03.807 "trtype": "TCP", 00:13:03.807 "adrfam": "IPv4", 00:13:03.807 "traddr": "10.0.0.1", 00:13:03.807 "trsvcid": "42154" 00:13:03.807 }, 00:13:03.807 "auth": { 00:13:03.807 "state": "completed", 00:13:03.807 "digest": "sha384", 00:13:03.807 "dhgroup": "null" 00:13:03.807 } 00:13:03.807 } 00:13:03.807 ]' 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.807 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.066 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:04.066 08:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:05.001 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.001 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.001 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:05.001 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.001 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.001 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.001 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.001 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:05.001 08:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.568 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.827 00:13:05.827 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.827 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.827 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.086 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.086 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.086 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.086 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.344 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.344 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.344 { 00:13:06.344 "cntlid": 53, 00:13:06.344 "qid": 0, 00:13:06.344 "state": "enabled", 00:13:06.344 "thread": "nvmf_tgt_poll_group_000", 00:13:06.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:06.344 "listen_address": { 00:13:06.344 "trtype": "TCP", 00:13:06.344 "adrfam": "IPv4", 00:13:06.344 "traddr": "10.0.0.3", 00:13:06.344 "trsvcid": "4420" 00:13:06.344 }, 00:13:06.344 "peer_address": { 00:13:06.344 "trtype": "TCP", 00:13:06.344 "adrfam": "IPv4", 00:13:06.344 "traddr": "10.0.0.1", 00:13:06.344 "trsvcid": "42180" 00:13:06.344 }, 00:13:06.344 "auth": { 00:13:06.344 "state": "completed", 00:13:06.344 "digest": "sha384", 00:13:06.344 "dhgroup": "null" 00:13:06.344 } 00:13:06.344 } 00:13:06.344 ]' 00:13:06.344 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.344 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:06.344 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.344 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:06.344 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.344 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.345 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.345 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.602 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:06.602 08:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:07.535 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.535 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:07.535 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.535 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.535 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.535 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.535 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:07.535 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.793 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.051 00:13:08.051 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.051 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.051 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.616 { 00:13:08.616 "cntlid": 55, 00:13:08.616 "qid": 0, 00:13:08.616 "state": "enabled", 00:13:08.616 "thread": "nvmf_tgt_poll_group_000", 00:13:08.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:08.616 "listen_address": { 00:13:08.616 "trtype": "TCP", 00:13:08.616 "adrfam": "IPv4", 00:13:08.616 "traddr": "10.0.0.3", 00:13:08.616 "trsvcid": "4420" 00:13:08.616 }, 00:13:08.616 "peer_address": { 00:13:08.616 "trtype": "TCP", 00:13:08.616 "adrfam": "IPv4", 00:13:08.616 "traddr": "10.0.0.1", 00:13:08.616 "trsvcid": "46884" 00:13:08.616 }, 00:13:08.616 "auth": { 00:13:08.616 "state": "completed", 00:13:08.616 "digest": "sha384", 00:13:08.616 "dhgroup": "null" 00:13:08.616 } 00:13:08.616 } 00:13:08.616 ]' 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.616 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.182 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:09.182 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.116 08:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.116 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.748 00:13:10.748 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:13:10.748 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.748 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.007 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.007 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.007 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.007 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.007 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.007 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.007 { 00:13:11.007 "cntlid": 57, 00:13:11.007 "qid": 0, 00:13:11.007 "state": "enabled", 00:13:11.007 "thread": "nvmf_tgt_poll_group_000", 00:13:11.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:11.007 "listen_address": { 00:13:11.007 "trtype": "TCP", 00:13:11.007 "adrfam": "IPv4", 00:13:11.007 "traddr": "10.0.0.3", 00:13:11.007 "trsvcid": "4420" 00:13:11.007 }, 00:13:11.007 "peer_address": { 00:13:11.007 "trtype": "TCP", 00:13:11.007 "adrfam": "IPv4", 00:13:11.007 "traddr": "10.0.0.1", 00:13:11.007 "trsvcid": "46924" 00:13:11.007 }, 00:13:11.007 "auth": { 00:13:11.007 "state": "completed", 00:13:11.007 "digest": "sha384", 00:13:11.007 "dhgroup": "ffdhe2048" 00:13:11.007 } 00:13:11.007 } 00:13:11.007 ]' 00:13:11.007 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.265 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.265 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.265 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:11.265 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.265 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.265 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.265 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.832 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:11.832 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: 
--dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:12.399 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.656 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:12.656 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.656 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.656 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.656 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.656 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:12.656 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.914 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.480 00:13:13.480 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.480 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.480 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.740 { 00:13:13.740 "cntlid": 59, 00:13:13.740 "qid": 0, 00:13:13.740 "state": "enabled", 00:13:13.740 "thread": "nvmf_tgt_poll_group_000", 00:13:13.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:13.740 "listen_address": { 00:13:13.740 "trtype": "TCP", 00:13:13.740 "adrfam": "IPv4", 00:13:13.740 "traddr": "10.0.0.3", 00:13:13.740 "trsvcid": "4420" 00:13:13.740 }, 00:13:13.740 "peer_address": { 00:13:13.740 "trtype": "TCP", 00:13:13.740 "adrfam": "IPv4", 00:13:13.740 "traddr": "10.0.0.1", 00:13:13.740 "trsvcid": "46958" 00:13:13.740 }, 00:13:13.740 "auth": { 00:13:13.740 "state": "completed", 00:13:13.740 "digest": "sha384", 00:13:13.740 "dhgroup": "ffdhe2048" 00:13:13.740 } 00:13:13.740 } 00:13:13.740 ]' 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.740 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.307 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:14.307 08:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:14.874 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.874 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:14.874 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.874 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.874 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.874 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.874 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:14.874 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.193 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.774 00:13:15.774 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.774 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.774 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.033 { 00:13:16.033 "cntlid": 61, 00:13:16.033 "qid": 0, 00:13:16.033 "state": "enabled", 00:13:16.033 "thread": "nvmf_tgt_poll_group_000", 00:13:16.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:16.033 "listen_address": { 00:13:16.033 "trtype": "TCP", 00:13:16.033 "adrfam": "IPv4", 00:13:16.033 "traddr": "10.0.0.3", 00:13:16.033 "trsvcid": "4420" 00:13:16.033 }, 00:13:16.033 "peer_address": { 00:13:16.033 "trtype": "TCP", 00:13:16.033 "adrfam": "IPv4", 00:13:16.033 "traddr": "10.0.0.1", 00:13:16.033 "trsvcid": "46996" 00:13:16.033 }, 00:13:16.033 "auth": { 00:13:16.033 "state": "completed", 00:13:16.033 "digest": "sha384", 00:13:16.033 "dhgroup": "ffdhe2048" 00:13:16.033 } 00:13:16.033 } 00:13:16.033 ]' 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.033 08:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.599 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:16.599 08:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:17.173 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.173 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:17.173 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.173 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.173 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.173 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.173 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:17.173 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.741 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.999 00:13:17.999 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.999 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.999 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.259 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.259 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.259 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.259 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.259 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.259 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.259 { 00:13:18.259 "cntlid": 63, 00:13:18.259 "qid": 0, 00:13:18.259 "state": "enabled", 00:13:18.259 "thread": "nvmf_tgt_poll_group_000", 00:13:18.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:18.259 "listen_address": { 00:13:18.259 "trtype": "TCP", 00:13:18.259 "adrfam": "IPv4", 00:13:18.259 "traddr": "10.0.0.3", 00:13:18.259 "trsvcid": "4420" 00:13:18.259 }, 00:13:18.259 "peer_address": { 00:13:18.259 "trtype": "TCP", 00:13:18.259 "adrfam": "IPv4", 00:13:18.259 "traddr": "10.0.0.1", 00:13:18.259 "trsvcid": "42170" 00:13:18.259 }, 00:13:18.259 "auth": { 00:13:18.259 "state": "completed", 00:13:18.259 "digest": "sha384", 00:13:18.259 "dhgroup": "ffdhe2048" 00:13:18.259 } 00:13:18.259 } 00:13:18.259 ]' 00:13:18.259 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.260 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.260 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.260 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:18.260 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.260 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.260 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.260 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.520 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:18.520 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:19.455 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:19.456 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.023 00:13:20.023 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.023 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.023 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.282 { 00:13:20.282 "cntlid": 65, 00:13:20.282 "qid": 0, 00:13:20.282 "state": "enabled", 00:13:20.282 "thread": "nvmf_tgt_poll_group_000", 00:13:20.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:20.282 "listen_address": { 00:13:20.282 "trtype": "TCP", 00:13:20.282 "adrfam": "IPv4", 00:13:20.282 "traddr": "10.0.0.3", 00:13:20.282 "trsvcid": "4420" 00:13:20.282 }, 00:13:20.282 "peer_address": { 00:13:20.282 "trtype": "TCP", 00:13:20.282 "adrfam": "IPv4", 00:13:20.282 "traddr": "10.0.0.1", 00:13:20.282 "trsvcid": "42190" 00:13:20.282 }, 00:13:20.282 "auth": { 00:13:20.282 "state": "completed", 00:13:20.282 "digest": "sha384", 00:13:20.282 "dhgroup": "ffdhe3072" 00:13:20.282 } 00:13:20.282 } 00:13:20.282 ]' 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:20.282 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.541 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.541 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.541 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.799 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:20.799 08:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.734 08:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.734 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.301 00:13:22.301 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.301 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.301 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.559 { 00:13:22.559 "cntlid": 67, 00:13:22.559 "qid": 0, 00:13:22.559 "state": "enabled", 00:13:22.559 "thread": "nvmf_tgt_poll_group_000", 00:13:22.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:22.559 "listen_address": { 00:13:22.559 "trtype": "TCP", 00:13:22.559 "adrfam": "IPv4", 00:13:22.559 "traddr": "10.0.0.3", 00:13:22.559 "trsvcid": "4420" 00:13:22.559 }, 00:13:22.559 "peer_address": { 00:13:22.559 "trtype": "TCP", 00:13:22.559 "adrfam": "IPv4", 00:13:22.559 "traddr": "10.0.0.1", 00:13:22.559 "trsvcid": "42230" 00:13:22.559 }, 00:13:22.559 "auth": { 00:13:22.559 "state": "completed", 00:13:22.559 "digest": "sha384", 00:13:22.559 "dhgroup": "ffdhe3072" 00:13:22.559 } 00:13:22.559 } 00:13:22.559 ]' 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.559 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.817 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:22.817 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.817 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.817 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.817 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.076 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:23.076 08:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:24.013 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.013 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:24.013 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.013 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.013 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.013 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.013 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:24.013 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.271 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.859 00:13:24.859 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.859 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.859 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.135 { 00:13:25.135 "cntlid": 69, 00:13:25.135 "qid": 0, 00:13:25.135 "state": "enabled", 00:13:25.135 "thread": "nvmf_tgt_poll_group_000", 00:13:25.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:25.135 "listen_address": { 00:13:25.135 "trtype": "TCP", 00:13:25.135 "adrfam": "IPv4", 00:13:25.135 "traddr": "10.0.0.3", 00:13:25.135 "trsvcid": "4420" 00:13:25.135 }, 00:13:25.135 "peer_address": { 00:13:25.135 "trtype": "TCP", 00:13:25.135 "adrfam": "IPv4", 00:13:25.135 "traddr": "10.0.0.1", 00:13:25.135 "trsvcid": "42264" 00:13:25.135 }, 00:13:25.135 "auth": { 00:13:25.135 "state": "completed", 00:13:25.135 "digest": "sha384", 00:13:25.135 "dhgroup": "ffdhe3072" 00:13:25.135 } 00:13:25.135 } 00:13:25.135 ]' 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:25.135 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.135 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.135 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:25.135 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.702 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:25.702 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:26.270 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.270 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:26.270 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.270 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.270 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.270 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.270 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:26.270 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:26.837 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:26.837 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.837 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:26.837 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:26.837 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:26.837 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.837 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:13:26.837 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.838 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.838 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.838 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:26.838 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.838 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.096 00:13:27.096 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.096 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.096 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.355 { 00:13:27.355 "cntlid": 71, 00:13:27.355 "qid": 0, 00:13:27.355 "state": "enabled", 00:13:27.355 "thread": "nvmf_tgt_poll_group_000", 00:13:27.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:27.355 "listen_address": { 00:13:27.355 "trtype": "TCP", 00:13:27.355 "adrfam": "IPv4", 00:13:27.355 "traddr": "10.0.0.3", 00:13:27.355 "trsvcid": "4420" 00:13:27.355 }, 00:13:27.355 "peer_address": { 00:13:27.355 "trtype": "TCP", 00:13:27.355 "adrfam": "IPv4", 00:13:27.355 "traddr": "10.0.0.1", 00:13:27.355 "trsvcid": "44708" 00:13:27.355 }, 00:13:27.355 "auth": { 00:13:27.355 "state": "completed", 00:13:27.355 "digest": "sha384", 00:13:27.355 "dhgroup": "ffdhe3072" 00:13:27.355 } 00:13:27.355 } 00:13:27.355 ]' 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.355 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.615 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:27.615 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.615 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.615 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.615 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.874 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:27.874 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:28.809 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.068 08:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.068 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.328 00:13:29.328 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.328 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.328 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.894 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.895 { 00:13:29.895 "cntlid": 73, 00:13:29.895 "qid": 0, 00:13:29.895 "state": "enabled", 00:13:29.895 "thread": "nvmf_tgt_poll_group_000", 00:13:29.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:29.895 "listen_address": { 00:13:29.895 "trtype": "TCP", 00:13:29.895 "adrfam": "IPv4", 00:13:29.895 "traddr": "10.0.0.3", 00:13:29.895 "trsvcid": "4420" 00:13:29.895 }, 00:13:29.895 "peer_address": { 00:13:29.895 "trtype": "TCP", 00:13:29.895 "adrfam": "IPv4", 00:13:29.895 "traddr": "10.0.0.1", 00:13:29.895 "trsvcid": "44752" 00:13:29.895 }, 00:13:29.895 "auth": { 00:13:29.895 "state": "completed", 00:13:29.895 "digest": "sha384", 00:13:29.895 "dhgroup": "ffdhe4096" 00:13:29.895 } 00:13:29.895 } 00:13:29.895 ]' 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.895 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.153 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:30.153 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.089 08:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.089 08:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.655 00:13:31.655 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.655 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.655 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.912 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.912 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.912 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.912 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.912 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.912 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.912 { 00:13:31.912 "cntlid": 75, 00:13:31.912 "qid": 0, 00:13:31.912 "state": "enabled", 00:13:31.912 "thread": "nvmf_tgt_poll_group_000", 00:13:31.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:31.912 "listen_address": { 00:13:31.912 "trtype": "TCP", 00:13:31.912 "adrfam": "IPv4", 00:13:31.912 "traddr": "10.0.0.3", 00:13:31.912 "trsvcid": "4420" 00:13:31.912 }, 00:13:31.912 "peer_address": { 00:13:31.912 "trtype": "TCP", 00:13:31.912 "adrfam": "IPv4", 00:13:31.912 "traddr": "10.0.0.1", 00:13:31.912 "trsvcid": "44788" 00:13:31.912 }, 00:13:31.912 "auth": { 00:13:31.912 "state": "completed", 00:13:31.912 "digest": "sha384", 00:13:31.912 "dhgroup": "ffdhe4096" 00:13:31.912 } 00:13:31.912 } 00:13:31.912 ]' 00:13:31.912 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.170 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.170 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.170 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:32.170 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.170 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.170 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.170 08:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.428 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:32.428 08:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:33.807 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.807 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:33.807 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.807 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.807 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.807 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.807 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:33.807 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.065 08:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.631 00:13:34.631 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.631 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.631 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.889 { 00:13:34.889 "cntlid": 77, 00:13:34.889 "qid": 0, 00:13:34.889 "state": "enabled", 00:13:34.889 "thread": "nvmf_tgt_poll_group_000", 00:13:34.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:34.889 "listen_address": { 00:13:34.889 "trtype": "TCP", 00:13:34.889 "adrfam": "IPv4", 00:13:34.889 "traddr": "10.0.0.3", 00:13:34.889 "trsvcid": "4420" 00:13:34.889 }, 00:13:34.889 "peer_address": { 00:13:34.889 "trtype": "TCP", 00:13:34.889 "adrfam": "IPv4", 00:13:34.889 "traddr": "10.0.0.1", 00:13:34.889 "trsvcid": "44816" 00:13:34.889 }, 00:13:34.889 "auth": { 00:13:34.889 "state": "completed", 00:13:34.889 "digest": "sha384", 00:13:34.889 "dhgroup": "ffdhe4096" 00:13:34.889 } 00:13:34.889 } 00:13:34.889 ]' 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.889 08:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.456 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:35.456 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:36.021 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.021 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:36.021 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.021 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.021 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.021 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.021 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:36.021 08:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.280 08:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.280 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.848 00:13:36.848 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.848 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.848 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.106 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.106 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.106 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.106 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.106 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.106 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.106 { 00:13:37.106 "cntlid": 79, 00:13:37.106 "qid": 0, 00:13:37.106 "state": "enabled", 00:13:37.106 "thread": "nvmf_tgt_poll_group_000", 00:13:37.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:37.106 "listen_address": { 00:13:37.106 "trtype": "TCP", 00:13:37.106 "adrfam": "IPv4", 00:13:37.106 "traddr": "10.0.0.3", 00:13:37.106 "trsvcid": "4420" 00:13:37.106 }, 00:13:37.106 "peer_address": { 00:13:37.106 "trtype": "TCP", 00:13:37.106 "adrfam": "IPv4", 00:13:37.106 "traddr": "10.0.0.1", 00:13:37.106 "trsvcid": "36664" 00:13:37.106 }, 00:13:37.106 "auth": { 00:13:37.106 "state": "completed", 00:13:37.106 "digest": "sha384", 00:13:37.106 "dhgroup": "ffdhe4096" 00:13:37.106 } 00:13:37.106 } 00:13:37.106 ]' 00:13:37.106 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.106 08:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:37.106 08:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.106 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:37.106 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.364 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.364 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.364 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.622 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:37.622 08:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:38.557 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.558 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.125 00:13:39.125 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.125 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.125 08:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.693 { 00:13:39.693 "cntlid": 81, 00:13:39.693 "qid": 0, 00:13:39.693 "state": "enabled", 00:13:39.693 "thread": "nvmf_tgt_poll_group_000", 00:13:39.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:39.693 "listen_address": { 00:13:39.693 "trtype": "TCP", 00:13:39.693 "adrfam": "IPv4", 00:13:39.693 "traddr": "10.0.0.3", 00:13:39.693 "trsvcid": "4420" 00:13:39.693 }, 00:13:39.693 "peer_address": { 00:13:39.693 "trtype": "TCP", 00:13:39.693 "adrfam": "IPv4", 00:13:39.693 "traddr": "10.0.0.1", 00:13:39.693 "trsvcid": "36690" 00:13:39.693 }, 00:13:39.693 "auth": { 00:13:39.693 "state": "completed", 00:13:39.693 "digest": "sha384", 00:13:39.693 "dhgroup": "ffdhe6144" 00:13:39.693 } 00:13:39.693 } 00:13:39.693 ]' 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
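The cycle traced above repeats for every digest/dhgroup/key combination: restrict the host's DH-HMAC-CHAP (dhchap) options, register the host NQN on the subsystem with the key under test, attach a controller through the host socket, check the resulting qpair, then tear everything down. A minimal shell sketch of one pass, distilled from the commands visible in this run (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the 10.0.0.3:4420 listener, host NQN, sha384/ffdhe6144 pair and key0/ckey0 names are simply the values this run uses, and the target-side calls go through rpc_cmd, whose socket is not shown in this excerpt):

# Host side: only offer the digest/dhgroup pair under test.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side: allow the host NQN on the subsystem with the key (and ctrl key, when one exists).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller with the same key material.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify: the controller exists and the qpair authenticated with the expected parameters.
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'             # expect nvme0
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'  # expect completed

# (In between, the run also exercises the kernel initiator with the same DHHC-1 secrets
#  via `nvme connect ... --dhchap-secret ...` and `nvme disconnect`; omitted here.)

# Tear down before the next combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb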
00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.693 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.952 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:39.952 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:40.887 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.887 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:40.887 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.887 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.887 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.887 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.887 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:40.887 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.145 08:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.404 00:13:41.663 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.663 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.663 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.921 { 00:13:41.921 "cntlid": 83, 00:13:41.921 "qid": 0, 00:13:41.921 "state": "enabled", 00:13:41.921 "thread": "nvmf_tgt_poll_group_000", 00:13:41.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:41.921 "listen_address": { 00:13:41.921 "trtype": "TCP", 00:13:41.921 "adrfam": "IPv4", 00:13:41.921 "traddr": "10.0.0.3", 00:13:41.921 "trsvcid": "4420" 00:13:41.921 }, 00:13:41.921 "peer_address": { 00:13:41.921 "trtype": "TCP", 00:13:41.921 "adrfam": "IPv4", 00:13:41.921 "traddr": "10.0.0.1", 00:13:41.921 "trsvcid": "36722" 00:13:41.921 }, 00:13:41.921 "auth": { 00:13:41.921 "state": "completed", 00:13:41.921 "digest": "sha384", 
00:13:41.921 "dhgroup": "ffdhe6144" 00:13:41.921 } 00:13:41.921 } 00:13:41.921 ]' 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.921 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.488 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:42.488 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:43.056 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.056 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:43.056 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.056 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.056 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.056 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.056 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:43.056 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.317 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.884 00:13:43.884 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.884 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.884 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.450 { 00:13:44.450 "cntlid": 85, 00:13:44.450 "qid": 0, 00:13:44.450 "state": "enabled", 00:13:44.450 "thread": "nvmf_tgt_poll_group_000", 00:13:44.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:44.450 "listen_address": { 00:13:44.450 "trtype": "TCP", 00:13:44.450 "adrfam": "IPv4", 00:13:44.450 "traddr": "10.0.0.3", 00:13:44.450 "trsvcid": "4420" 00:13:44.450 }, 00:13:44.450 "peer_address": { 00:13:44.450 "trtype": "TCP", 00:13:44.450 "adrfam": "IPv4", 00:13:44.450 "traddr": "10.0.0.1", 00:13:44.450 "trsvcid": "36740" 
00:13:44.450 }, 00:13:44.450 "auth": { 00:13:44.450 "state": "completed", 00:13:44.450 "digest": "sha384", 00:13:44.450 "dhgroup": "ffdhe6144" 00:13:44.450 } 00:13:44.450 } 00:13:44.450 ]' 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.450 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.709 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:44.709 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:45.693 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.693 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:45.693 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.693 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.693 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.693 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.693 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:45.693 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.952 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.953 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.953 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.953 08:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.520 00:13:46.520 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.520 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.520 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.777 { 00:13:46.777 "cntlid": 87, 00:13:46.777 "qid": 0, 00:13:46.777 "state": "enabled", 00:13:46.777 "thread": "nvmf_tgt_poll_group_000", 00:13:46.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:46.777 "listen_address": { 00:13:46.777 "trtype": "TCP", 00:13:46.777 "adrfam": "IPv4", 00:13:46.777 "traddr": "10.0.0.3", 00:13:46.777 "trsvcid": "4420" 00:13:46.777 }, 00:13:46.777 "peer_address": { 00:13:46.777 "trtype": "TCP", 00:13:46.777 "adrfam": "IPv4", 00:13:46.777 "traddr": "10.0.0.1", 00:13:46.777 "trsvcid": 
"35240" 00:13:46.777 }, 00:13:46.777 "auth": { 00:13:46.777 "state": "completed", 00:13:46.777 "digest": "sha384", 00:13:46.777 "dhgroup": "ffdhe6144" 00:13:46.777 } 00:13:46.777 } 00:13:46.777 ]' 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.777 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.036 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:47.036 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:47.972 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.972 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:47.972 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.973 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.973 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.973 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.973 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.973 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:47.973 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.232 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.168 00:13:49.168 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.168 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.168 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.428 { 00:13:49.428 "cntlid": 89, 00:13:49.428 "qid": 0, 00:13:49.428 "state": "enabled", 00:13:49.428 "thread": "nvmf_tgt_poll_group_000", 00:13:49.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:49.428 "listen_address": { 00:13:49.428 "trtype": "TCP", 00:13:49.428 "adrfam": "IPv4", 00:13:49.428 "traddr": "10.0.0.3", 00:13:49.428 "trsvcid": "4420" 00:13:49.428 }, 00:13:49.428 "peer_address": { 00:13:49.428 
"trtype": "TCP", 00:13:49.428 "adrfam": "IPv4", 00:13:49.428 "traddr": "10.0.0.1", 00:13:49.428 "trsvcid": "35262" 00:13:49.428 }, 00:13:49.428 "auth": { 00:13:49.428 "state": "completed", 00:13:49.428 "digest": "sha384", 00:13:49.428 "dhgroup": "ffdhe8192" 00:13:49.428 } 00:13:49.428 } 00:13:49.428 ]' 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.428 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.686 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:49.686 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:50.698 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.698 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:50.698 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.698 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.698 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.698 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:50.698 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:50.958 08:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.958 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.529 00:13:51.529 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.529 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.529 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.790 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.790 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.790 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.790 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.790 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.790 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.790 { 00:13:51.790 "cntlid": 91, 00:13:51.790 "qid": 0, 00:13:51.791 "state": "enabled", 00:13:51.791 "thread": "nvmf_tgt_poll_group_000", 00:13:51.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 
00:13:51.791 "listen_address": { 00:13:51.791 "trtype": "TCP", 00:13:51.791 "adrfam": "IPv4", 00:13:51.791 "traddr": "10.0.0.3", 00:13:51.791 "trsvcid": "4420" 00:13:51.791 }, 00:13:51.791 "peer_address": { 00:13:51.791 "trtype": "TCP", 00:13:51.791 "adrfam": "IPv4", 00:13:51.791 "traddr": "10.0.0.1", 00:13:51.791 "trsvcid": "35298" 00:13:51.791 }, 00:13:51.791 "auth": { 00:13:51.791 "state": "completed", 00:13:51.791 "digest": "sha384", 00:13:51.791 "dhgroup": "ffdhe8192" 00:13:51.791 } 00:13:51.791 } 00:13:51.791 ]' 00:13:51.791 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.791 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.791 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.050 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:52.050 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.050 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.050 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.050 08:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.309 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:52.309 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:13:53.246 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.246 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:53.246 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.246 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.246 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.246 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.246 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:53.246 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.246 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.506 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.506 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.506 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.506 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.074 00:13:54.075 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.075 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.075 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.333 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.334 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.334 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.334 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.334 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.334 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.334 { 00:13:54.334 "cntlid": 93, 00:13:54.334 "qid": 0, 00:13:54.334 "state": "enabled", 00:13:54.334 "thread": 
"nvmf_tgt_poll_group_000", 00:13:54.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:54.334 "listen_address": { 00:13:54.334 "trtype": "TCP", 00:13:54.334 "adrfam": "IPv4", 00:13:54.334 "traddr": "10.0.0.3", 00:13:54.334 "trsvcid": "4420" 00:13:54.334 }, 00:13:54.334 "peer_address": { 00:13:54.334 "trtype": "TCP", 00:13:54.334 "adrfam": "IPv4", 00:13:54.334 "traddr": "10.0.0.1", 00:13:54.334 "trsvcid": "35312" 00:13:54.334 }, 00:13:54.334 "auth": { 00:13:54.334 "state": "completed", 00:13:54.334 "digest": "sha384", 00:13:54.334 "dhgroup": "ffdhe8192" 00:13:54.334 } 00:13:54.334 } 00:13:54.334 ]' 00:13:54.334 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.593 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:54.593 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.593 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:54.593 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.593 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.593 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.593 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.853 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:54.853 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:13:55.788 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.788 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:55.788 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.788 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.788 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.788 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.788 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:55.788 08:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.047 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.615 00:13:56.615 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.615 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.615 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.182 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.182 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.182 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.182 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.182 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.182 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.182 { 00:13:57.182 "cntlid": 95, 00:13:57.182 "qid": 0, 00:13:57.182 "state": "enabled", 00:13:57.182 
"thread": "nvmf_tgt_poll_group_000", 00:13:57.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:57.182 "listen_address": { 00:13:57.182 "trtype": "TCP", 00:13:57.182 "adrfam": "IPv4", 00:13:57.183 "traddr": "10.0.0.3", 00:13:57.183 "trsvcid": "4420" 00:13:57.183 }, 00:13:57.183 "peer_address": { 00:13:57.183 "trtype": "TCP", 00:13:57.183 "adrfam": "IPv4", 00:13:57.183 "traddr": "10.0.0.1", 00:13:57.183 "trsvcid": "50850" 00:13:57.183 }, 00:13:57.183 "auth": { 00:13:57.183 "state": "completed", 00:13:57.183 "digest": "sha384", 00:13:57.183 "dhgroup": "ffdhe8192" 00:13:57.183 } 00:13:57.183 } 00:13:57.183 ]' 00:13:57.183 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.183 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.183 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.183 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:57.183 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.183 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.183 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.183 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.441 08:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:57.441 08:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.377 08:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:58.377 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.635 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.894 00:13:58.894 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.894 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.894 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.153 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.153 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.153 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.153 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.412 { 00:13:59.412 "cntlid": 97, 00:13:59.412 "qid": 0, 00:13:59.412 "state": "enabled", 00:13:59.412 "thread": "nvmf_tgt_poll_group_000", 00:13:59.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:13:59.412 "listen_address": { 00:13:59.412 "trtype": "TCP", 00:13:59.412 "adrfam": "IPv4", 00:13:59.412 "traddr": "10.0.0.3", 00:13:59.412 "trsvcid": "4420" 00:13:59.412 }, 00:13:59.412 "peer_address": { 00:13:59.412 "trtype": "TCP", 00:13:59.412 "adrfam": "IPv4", 00:13:59.412 "traddr": "10.0.0.1", 00:13:59.412 "trsvcid": "50876" 00:13:59.412 }, 00:13:59.412 "auth": { 00:13:59.412 "state": "completed", 00:13:59.412 "digest": "sha512", 00:13:59.412 "dhgroup": "null" 00:13:59.412 } 00:13:59.412 } 00:13:59.412 ]' 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.412 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.690 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:13:59.690 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.627 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.196 00:14:01.196 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.196 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.196 08:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.454 08:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.454 { 00:14:01.454 "cntlid": 99, 00:14:01.454 "qid": 0, 00:14:01.454 "state": "enabled", 00:14:01.454 "thread": "nvmf_tgt_poll_group_000", 00:14:01.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:01.454 "listen_address": { 00:14:01.454 "trtype": "TCP", 00:14:01.454 "adrfam": "IPv4", 00:14:01.454 "traddr": "10.0.0.3", 00:14:01.454 "trsvcid": "4420" 00:14:01.454 }, 00:14:01.454 "peer_address": { 00:14:01.454 "trtype": "TCP", 00:14:01.454 "adrfam": "IPv4", 00:14:01.454 "traddr": "10.0.0.1", 00:14:01.454 "trsvcid": "50904" 00:14:01.454 }, 00:14:01.454 "auth": { 00:14:01.454 "state": "completed", 00:14:01.454 "digest": "sha512", 00:14:01.454 "dhgroup": "null" 00:14:01.454 } 00:14:01.454 } 00:14:01.454 ]' 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.454 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.021 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:02.021 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:02.588 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.588 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:02.588 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.588 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.588 08:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.588 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.588 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:02.588 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.847 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.105 00:14:03.105 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.105 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.105 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.672 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.673 { 00:14:03.673 "cntlid": 101, 00:14:03.673 "qid": 0, 00:14:03.673 "state": "enabled", 00:14:03.673 "thread": "nvmf_tgt_poll_group_000", 00:14:03.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:03.673 "listen_address": { 00:14:03.673 "trtype": "TCP", 00:14:03.673 "adrfam": "IPv4", 00:14:03.673 "traddr": "10.0.0.3", 00:14:03.673 "trsvcid": "4420" 00:14:03.673 }, 00:14:03.673 "peer_address": { 00:14:03.673 "trtype": "TCP", 00:14:03.673 "adrfam": "IPv4", 00:14:03.673 "traddr": "10.0.0.1", 00:14:03.673 "trsvcid": "50934" 00:14:03.673 }, 00:14:03.673 "auth": { 00:14:03.673 "state": "completed", 00:14:03.673 "digest": "sha512", 00:14:03.673 "dhgroup": "null" 00:14:03.673 } 00:14:03.673 } 00:14:03.673 ]' 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.673 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.932 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:03.932 08:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:04.882 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.882 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:04.882 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.882 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:04.882 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.882 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.882 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:04.882 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.176 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.434 00:14:05.434 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.434 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.434 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.692 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.692 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.692 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:05.692 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.692 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.692 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.692 { 00:14:05.692 "cntlid": 103, 00:14:05.692 "qid": 0, 00:14:05.692 "state": "enabled", 00:14:05.692 "thread": "nvmf_tgt_poll_group_000", 00:14:05.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:05.692 "listen_address": { 00:14:05.692 "trtype": "TCP", 00:14:05.692 "adrfam": "IPv4", 00:14:05.692 "traddr": "10.0.0.3", 00:14:05.692 "trsvcid": "4420" 00:14:05.692 }, 00:14:05.692 "peer_address": { 00:14:05.692 "trtype": "TCP", 00:14:05.692 "adrfam": "IPv4", 00:14:05.692 "traddr": "10.0.0.1", 00:14:05.692 "trsvcid": "50964" 00:14:05.692 }, 00:14:05.692 "auth": { 00:14:05.692 "state": "completed", 00:14:05.692 "digest": "sha512", 00:14:05.692 "dhgroup": "null" 00:14:05.692 } 00:14:05.692 } 00:14:05.692 ]' 00:14:05.693 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.693 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.693 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.951 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:05.951 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.951 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.951 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.951 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.210 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:06.210 08:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:06.776 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.343 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.601 00:14:07.601 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.601 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.601 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.860 
08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.860 { 00:14:07.860 "cntlid": 105, 00:14:07.860 "qid": 0, 00:14:07.860 "state": "enabled", 00:14:07.860 "thread": "nvmf_tgt_poll_group_000", 00:14:07.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:07.860 "listen_address": { 00:14:07.860 "trtype": "TCP", 00:14:07.860 "adrfam": "IPv4", 00:14:07.860 "traddr": "10.0.0.3", 00:14:07.860 "trsvcid": "4420" 00:14:07.860 }, 00:14:07.860 "peer_address": { 00:14:07.860 "trtype": "TCP", 00:14:07.860 "adrfam": "IPv4", 00:14:07.860 "traddr": "10.0.0.1", 00:14:07.860 "trsvcid": "46976" 00:14:07.860 }, 00:14:07.860 "auth": { 00:14:07.860 "state": "completed", 00:14:07.860 "digest": "sha512", 00:14:07.860 "dhgroup": "ffdhe2048" 00:14:07.860 } 00:14:07.860 } 00:14:07.860 ]' 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.860 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.118 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.118 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.118 08:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.379 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:08.379 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:09.314 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.314 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:09.314 08:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.314 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.314 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.314 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.314 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:09.314 08:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.573 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.831 00:14:09.831 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.831 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.831 08:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.400 { 00:14:10.400 "cntlid": 107, 00:14:10.400 "qid": 0, 00:14:10.400 "state": "enabled", 00:14:10.400 "thread": "nvmf_tgt_poll_group_000", 00:14:10.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:10.400 "listen_address": { 00:14:10.400 "trtype": "TCP", 00:14:10.400 "adrfam": "IPv4", 00:14:10.400 "traddr": "10.0.0.3", 00:14:10.400 "trsvcid": "4420" 00:14:10.400 }, 00:14:10.400 "peer_address": { 00:14:10.400 "trtype": "TCP", 00:14:10.400 "adrfam": "IPv4", 00:14:10.400 "traddr": "10.0.0.1", 00:14:10.400 "trsvcid": "47002" 00:14:10.400 }, 00:14:10.400 "auth": { 00:14:10.400 "state": "completed", 00:14:10.400 "digest": "sha512", 00:14:10.400 "dhgroup": "ffdhe2048" 00:14:10.400 } 00:14:10.400 } 00:14:10.400 ]' 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.400 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.678 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:10.678 08:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.615 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.183 00:14:12.183 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.183 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.183 08:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.442 { 00:14:12.442 "cntlid": 109, 00:14:12.442 "qid": 0, 00:14:12.442 "state": "enabled", 00:14:12.442 "thread": "nvmf_tgt_poll_group_000", 00:14:12.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:12.442 "listen_address": { 00:14:12.442 "trtype": "TCP", 00:14:12.442 "adrfam": "IPv4", 00:14:12.442 "traddr": "10.0.0.3", 00:14:12.442 "trsvcid": "4420" 00:14:12.442 }, 00:14:12.442 "peer_address": { 00:14:12.442 "trtype": "TCP", 00:14:12.442 "adrfam": "IPv4", 00:14:12.442 "traddr": "10.0.0.1", 00:14:12.442 "trsvcid": "47014" 00:14:12.442 }, 00:14:12.442 "auth": { 00:14:12.442 "state": "completed", 00:14:12.442 "digest": "sha512", 00:14:12.442 "dhgroup": "ffdhe2048" 00:14:12.442 } 00:14:12.442 } 00:14:12.442 ]' 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:12.442 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.701 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.701 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.701 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.960 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:12.960 08:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:13.528 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.528 08:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:13.528 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.528 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.786 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.786 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.786 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:13.786 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.044 08:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.302 00:14:14.302 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.302 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.302 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.559 { 00:14:14.559 "cntlid": 111, 00:14:14.559 "qid": 0, 00:14:14.559 "state": "enabled", 00:14:14.559 "thread": "nvmf_tgt_poll_group_000", 00:14:14.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:14.559 "listen_address": { 00:14:14.559 "trtype": "TCP", 00:14:14.559 "adrfam": "IPv4", 00:14:14.559 "traddr": "10.0.0.3", 00:14:14.559 "trsvcid": "4420" 00:14:14.559 }, 00:14:14.559 "peer_address": { 00:14:14.559 "trtype": "TCP", 00:14:14.559 "adrfam": "IPv4", 00:14:14.559 "traddr": "10.0.0.1", 00:14:14.559 "trsvcid": "47040" 00:14:14.559 }, 00:14:14.559 "auth": { 00:14:14.559 "state": "completed", 00:14:14.559 "digest": "sha512", 00:14:14.559 "dhgroup": "ffdhe2048" 00:14:14.559 } 00:14:14.559 } 00:14:14.559 ]' 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.559 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.873 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:14.873 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.873 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.873 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.873 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.131 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:15.131 08:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.065 08:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.637 00:14:16.637 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.637 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:14:16.637 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.909 { 00:14:16.909 "cntlid": 113, 00:14:16.909 "qid": 0, 00:14:16.909 "state": "enabled", 00:14:16.909 "thread": "nvmf_tgt_poll_group_000", 00:14:16.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:16.909 "listen_address": { 00:14:16.909 "trtype": "TCP", 00:14:16.909 "adrfam": "IPv4", 00:14:16.909 "traddr": "10.0.0.3", 00:14:16.909 "trsvcid": "4420" 00:14:16.909 }, 00:14:16.909 "peer_address": { 00:14:16.909 "trtype": "TCP", 00:14:16.909 "adrfam": "IPv4", 00:14:16.909 "traddr": "10.0.0.1", 00:14:16.909 "trsvcid": "36920" 00:14:16.909 }, 00:14:16.909 "auth": { 00:14:16.909 "state": "completed", 00:14:16.909 "digest": "sha512", 00:14:16.909 "dhgroup": "ffdhe3072" 00:14:16.909 } 00:14:16.909 } 00:14:16.909 ]' 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.909 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.168 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.168 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.168 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.427 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:17.427 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:17.994 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.994 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:17.994 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.994 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.994 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.994 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.994 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:17.994 08:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.559 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.817 00:14:18.817 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.817 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.817 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.098 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.098 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.098 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.098 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.098 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.098 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.098 { 00:14:19.098 "cntlid": 115, 00:14:19.098 "qid": 0, 00:14:19.098 "state": "enabled", 00:14:19.098 "thread": "nvmf_tgt_poll_group_000", 00:14:19.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:19.098 "listen_address": { 00:14:19.098 "trtype": "TCP", 00:14:19.098 "adrfam": "IPv4", 00:14:19.098 "traddr": "10.0.0.3", 00:14:19.098 "trsvcid": "4420" 00:14:19.098 }, 00:14:19.098 "peer_address": { 00:14:19.098 "trtype": "TCP", 00:14:19.098 "adrfam": "IPv4", 00:14:19.098 "traddr": "10.0.0.1", 00:14:19.098 "trsvcid": "36952" 00:14:19.098 }, 00:14:19.098 "auth": { 00:14:19.098 "state": "completed", 00:14:19.098 "digest": "sha512", 00:14:19.098 "dhgroup": "ffdhe3072" 00:14:19.098 } 00:14:19.098 } 00:14:19.099 ]' 00:14:19.099 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.099 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.099 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.099 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:19.099 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.099 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.099 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.099 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.357 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:19.357 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 
0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:20.293 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.293 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:20.293 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.293 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.293 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.293 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.293 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:20.293 08:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.552 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.811 00:14:20.811 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.811 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.811 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.378 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.378 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.378 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.378 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.378 { 00:14:21.378 "cntlid": 117, 00:14:21.378 "qid": 0, 00:14:21.378 "state": "enabled", 00:14:21.378 "thread": "nvmf_tgt_poll_group_000", 00:14:21.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:21.378 "listen_address": { 00:14:21.378 "trtype": "TCP", 00:14:21.378 "adrfam": "IPv4", 00:14:21.378 "traddr": "10.0.0.3", 00:14:21.378 "trsvcid": "4420" 00:14:21.378 }, 00:14:21.378 "peer_address": { 00:14:21.378 "trtype": "TCP", 00:14:21.378 "adrfam": "IPv4", 00:14:21.378 "traddr": "10.0.0.1", 00:14:21.378 "trsvcid": "36984" 00:14:21.378 }, 00:14:21.378 "auth": { 00:14:21.378 "state": "completed", 00:14:21.378 "digest": "sha512", 00:14:21.378 "dhgroup": "ffdhe3072" 00:14:21.378 } 00:14:21.378 } 00:14:21.378 ]' 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.378 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.637 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:21.637 08:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.573 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.831 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.831 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:22.831 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.831 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.090 00:14:23.090 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.090 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.090 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.348 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.348 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.348 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.348 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.348 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.348 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.348 { 00:14:23.348 "cntlid": 119, 00:14:23.348 "qid": 0, 00:14:23.348 "state": "enabled", 00:14:23.348 "thread": "nvmf_tgt_poll_group_000", 00:14:23.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:23.348 "listen_address": { 00:14:23.348 "trtype": "TCP", 00:14:23.348 "adrfam": "IPv4", 00:14:23.348 "traddr": "10.0.0.3", 00:14:23.348 "trsvcid": "4420" 00:14:23.348 }, 00:14:23.348 "peer_address": { 00:14:23.348 "trtype": "TCP", 00:14:23.348 "adrfam": "IPv4", 00:14:23.348 "traddr": "10.0.0.1", 00:14:23.348 "trsvcid": "37010" 00:14:23.348 }, 00:14:23.348 "auth": { 00:14:23.348 "state": "completed", 00:14:23.348 "digest": "sha512", 00:14:23.348 "dhgroup": "ffdhe3072" 00:14:23.348 } 00:14:23.348 } 00:14:23.349 ]' 00:14:23.349 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.349 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.349 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.349 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:23.349 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.638 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.638 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.638 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.897 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:23.897 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:24.464 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.031 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.289 00:14:25.289 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.289 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.289 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.548 { 00:14:25.548 "cntlid": 121, 00:14:25.548 "qid": 0, 00:14:25.548 "state": "enabled", 00:14:25.548 "thread": "nvmf_tgt_poll_group_000", 00:14:25.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:25.548 "listen_address": { 00:14:25.548 "trtype": "TCP", 00:14:25.548 "adrfam": "IPv4", 00:14:25.548 "traddr": "10.0.0.3", 00:14:25.548 "trsvcid": "4420" 00:14:25.548 }, 00:14:25.548 "peer_address": { 00:14:25.548 "trtype": "TCP", 00:14:25.548 "adrfam": "IPv4", 00:14:25.548 "traddr": "10.0.0.1", 00:14:25.548 "trsvcid": "37044" 00:14:25.548 }, 00:14:25.548 "auth": { 00:14:25.548 "state": "completed", 00:14:25.548 "digest": "sha512", 00:14:25.548 "dhgroup": "ffdhe4096" 00:14:25.548 } 00:14:25.548 } 00:14:25.548 ]' 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.548 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.807 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.807 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.807 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.066 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:26.066 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:26.633 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.633 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:26.633 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.633 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.633 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.633 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.633 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:26.633 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.892 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.151 00:14:27.410 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.410 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.410 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.670 { 00:14:27.670 "cntlid": 123, 00:14:27.670 "qid": 0, 00:14:27.670 "state": "enabled", 00:14:27.670 "thread": "nvmf_tgt_poll_group_000", 00:14:27.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:27.670 "listen_address": { 00:14:27.670 "trtype": "TCP", 00:14:27.670 "adrfam": "IPv4", 00:14:27.670 "traddr": "10.0.0.3", 00:14:27.670 "trsvcid": "4420" 00:14:27.670 }, 00:14:27.670 "peer_address": { 00:14:27.670 "trtype": "TCP", 00:14:27.670 "adrfam": "IPv4", 00:14:27.670 "traddr": "10.0.0.1", 00:14:27.670 "trsvcid": "49146" 00:14:27.670 }, 00:14:27.670 "auth": { 00:14:27.670 "state": "completed", 00:14:27.670 "digest": "sha512", 00:14:27.670 "dhgroup": "ffdhe4096" 00:14:27.670 } 00:14:27.670 } 00:14:27.670 ]' 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.670 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.238 08:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:28.238 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:28.849 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.849 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:28.849 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.849 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.849 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.849 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.849 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:28.849 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.109 08:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.109 08:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.677 00:14:29.677 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.677 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.677 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.936 { 00:14:29.936 "cntlid": 125, 00:14:29.936 "qid": 0, 00:14:29.936 "state": "enabled", 00:14:29.936 "thread": "nvmf_tgt_poll_group_000", 00:14:29.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:29.936 "listen_address": { 00:14:29.936 "trtype": "TCP", 00:14:29.936 "adrfam": "IPv4", 00:14:29.936 "traddr": "10.0.0.3", 00:14:29.936 "trsvcid": "4420" 00:14:29.936 }, 00:14:29.936 "peer_address": { 00:14:29.936 "trtype": "TCP", 00:14:29.936 "adrfam": "IPv4", 00:14:29.936 "traddr": "10.0.0.1", 00:14:29.936 "trsvcid": "49170" 00:14:29.936 }, 00:14:29.936 "auth": { 00:14:29.936 "state": "completed", 00:14:29.936 "digest": "sha512", 00:14:29.936 "dhgroup": "ffdhe4096" 00:14:29.936 } 00:14:29.936 } 00:14:29.936 ]' 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.936 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.196 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.196 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.196 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.454 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:30.454 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:31.021 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.021 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:31.021 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.021 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.021 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.021 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.021 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:31.021 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.587 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:31.588 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.588 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.846 00:14:31.846 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.846 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.846 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.105 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.105 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.105 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.105 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.105 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.105 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.105 { 00:14:32.105 "cntlid": 127, 00:14:32.105 "qid": 0, 00:14:32.105 "state": "enabled", 00:14:32.105 "thread": "nvmf_tgt_poll_group_000", 00:14:32.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:32.105 "listen_address": { 00:14:32.105 "trtype": "TCP", 00:14:32.105 "adrfam": "IPv4", 00:14:32.105 "traddr": "10.0.0.3", 00:14:32.105 "trsvcid": "4420" 00:14:32.105 }, 00:14:32.105 "peer_address": { 00:14:32.105 "trtype": "TCP", 00:14:32.105 "adrfam": "IPv4", 00:14:32.105 "traddr": "10.0.0.1", 00:14:32.105 "trsvcid": "49198" 00:14:32.105 }, 00:14:32.105 "auth": { 00:14:32.105 "state": "completed", 00:14:32.105 "digest": "sha512", 00:14:32.105 "dhgroup": "ffdhe4096" 00:14:32.105 } 00:14:32.105 } 00:14:32.105 ]' 00:14:32.105 08:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.364 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.364 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.364 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:32.364 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.364 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.364 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.364 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.622 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:32.623 08:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:33.576 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.835 08:48:04 
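One detail worth noting in the rounds above: the key3 passes (the nvme connect with only --dhchap-secret, and the add_host/attach calls with only --dhchap-key key3) carry no controller key, so authentication is one-way, host to controller only. That is what the ${ckeys[$3]:+...} expansion at target/auth.sh@68 accomplishes. The sketch below isolates that expansion; the ckeys contents and the keyid variable are placeholders, only the expansion logic matters.

# Sketch of the conditional controller-key handling seen at target/auth.sh@68.
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")   # assume key3 has no controller key
keyid=3
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
# With ckeys[3] empty, ${ckeys[3]:+...} expands to nothing, so a call such as
#   rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3 "${ckey[@]}"
# omits --dhchap-ctrlr-key entirely and only the host proves its identity.
echo "${#ckey[@]} extra argument(s)"   # prints 0 for keyid=3, 2 otherwise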
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.835 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.094 00:14:34.352 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.352 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.352 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.611 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.611 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.611 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.611 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.611 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.611 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.611 { 00:14:34.611 "cntlid": 129, 00:14:34.611 "qid": 0, 00:14:34.611 "state": "enabled", 00:14:34.611 "thread": "nvmf_tgt_poll_group_000", 00:14:34.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:34.611 "listen_address": { 00:14:34.611 "trtype": "TCP", 00:14:34.611 "adrfam": "IPv4", 00:14:34.611 "traddr": "10.0.0.3", 00:14:34.611 "trsvcid": "4420" 00:14:34.611 }, 00:14:34.611 "peer_address": { 00:14:34.611 "trtype": "TCP", 00:14:34.611 "adrfam": "IPv4", 00:14:34.611 "traddr": "10.0.0.1", 00:14:34.611 "trsvcid": "49232" 00:14:34.611 }, 00:14:34.611 "auth": { 00:14:34.611 "state": "completed", 00:14:34.611 "digest": "sha512", 00:14:34.611 "dhgroup": "ffdhe6144" 00:14:34.611 } 00:14:34.612 } 00:14:34.612 ]' 00:14:34.612 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.612 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.612 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.612 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.612 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.612 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.612 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.612 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.179 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:35.179 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:35.747 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.747 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:35.747 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.747 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.747 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.747 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.747 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:35.747 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.006 08:48:06 
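The nvme connect / nvme disconnect records above are the kernel-initiator half of each round: the same subsystem is dialled with nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line. The DHHC-1:<hh>:<base64>: strings are the secret representation from the NVMe in-band authentication spec; as far as the format goes, the two-digit field after "DHHC-1:" identifies the hash used to transform the secret (00 meaning an untransformed secret), and the four keys in this log happen to cover 00 through 03. A sketch of the call, with placeholder secrets and the same addresses as in this log, looks roughly like:

# Sketch, assuming nvme-cli built with NVMe/TCP in-band auth support;
# <host-secret> and <ctrl-secret> are placeholders for real key material.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb

nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -l 0 \
    -q "$hostnqn" --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb \
    --dhchap-secret "DHHC-1:00:<host-secret>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<ctrl-secret>:"   # omit for one-way auth

nvme disconnect -n "$subnqn"   # teardown used before the next round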
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.006 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.574 00:14:36.574 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.574 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.574 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.833 { 00:14:36.833 "cntlid": 131, 00:14:36.833 "qid": 0, 00:14:36.833 "state": "enabled", 00:14:36.833 "thread": "nvmf_tgt_poll_group_000", 00:14:36.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:36.833 "listen_address": { 00:14:36.833 "trtype": "TCP", 00:14:36.833 "adrfam": "IPv4", 00:14:36.833 "traddr": "10.0.0.3", 00:14:36.833 "trsvcid": "4420" 00:14:36.833 }, 00:14:36.833 "peer_address": { 00:14:36.833 "trtype": "TCP", 00:14:36.833 "adrfam": "IPv4", 00:14:36.833 "traddr": "10.0.0.1", 00:14:36.833 "trsvcid": "47354" 00:14:36.833 }, 00:14:36.833 "auth": { 00:14:36.833 "state": "completed", 00:14:36.833 "digest": "sha512", 00:14:36.833 "dhgroup": "ffdhe6144" 00:14:36.833 } 00:14:36.833 } 00:14:36.833 ]' 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.833 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.092 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:37.092 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:14:37.092 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.092 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.092 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.352 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:37.352 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:38.289 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.289 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:38.289 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.289 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.289 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.289 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.289 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:38.289 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.547 08:48:09 
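After every attach, the test asks the target what was actually negotiated: nvmf_subsystem_get_qpairs dumps the subsystem's qpairs (a single admin queue, qid 0, in this log), and the jq extractions followed by [[ ]] comparisons assert that .auth.digest and .auth.dhgroup match the round's parameters and that .auth.state reached "completed". Stripped of the script plumbing, the check amounts to the sketch below; the sha512/ffdhe6144 values match the rounds shown here and the rpc.py path matches the rest of the log.

# Sketch of the post-attach verification, assuming a single active qpair.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")   # target-side RPC
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]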
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.547 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.130 00:14:39.130 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.130 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.130 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.430 { 00:14:39.430 "cntlid": 133, 00:14:39.430 "qid": 0, 00:14:39.430 "state": "enabled", 00:14:39.430 "thread": "nvmf_tgt_poll_group_000", 00:14:39.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:39.430 "listen_address": { 00:14:39.430 "trtype": "TCP", 00:14:39.430 "adrfam": "IPv4", 00:14:39.430 "traddr": "10.0.0.3", 00:14:39.430 "trsvcid": "4420" 00:14:39.430 }, 00:14:39.430 "peer_address": { 00:14:39.430 "trtype": "TCP", 00:14:39.430 "adrfam": "IPv4", 00:14:39.430 "traddr": "10.0.0.1", 00:14:39.430 "trsvcid": "47388" 00:14:39.430 }, 00:14:39.430 "auth": { 00:14:39.430 "state": "completed", 00:14:39.430 "digest": "sha512", 00:14:39.430 "dhgroup": "ffdhe6144" 00:14:39.430 } 00:14:39.430 } 00:14:39.430 ]' 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:39.430 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.431 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.431 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.431 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.689 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:39.689 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:40.256 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.256 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:40.256 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.256 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.515 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.515 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.515 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:40.515 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.774 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.342 00:14:41.342 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.342 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.342 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.601 { 00:14:41.601 "cntlid": 135, 00:14:41.601 "qid": 0, 00:14:41.601 "state": "enabled", 00:14:41.601 "thread": "nvmf_tgt_poll_group_000", 00:14:41.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:41.601 "listen_address": { 00:14:41.601 "trtype": "TCP", 00:14:41.601 "adrfam": "IPv4", 00:14:41.601 "traddr": "10.0.0.3", 00:14:41.601 "trsvcid": "4420" 00:14:41.601 }, 00:14:41.601 "peer_address": { 00:14:41.601 "trtype": "TCP", 00:14:41.601 "adrfam": "IPv4", 00:14:41.601 "traddr": "10.0.0.1", 00:14:41.601 "trsvcid": "47410" 00:14:41.601 }, 00:14:41.601 "auth": { 00:14:41.601 "state": "completed", 00:14:41.601 "digest": "sha512", 00:14:41.601 "dhgroup": "ffdhe6144" 00:14:41.601 } 00:14:41.601 } 00:14:41.601 ]' 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.601 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.860 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:41.860 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:42.796 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.797 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.797 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.797 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.055 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.055 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.055 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.055 08:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.623 00:14:43.623 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.623 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.623 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.882 { 00:14:43.882 "cntlid": 137, 00:14:43.882 "qid": 0, 00:14:43.882 "state": "enabled", 00:14:43.882 "thread": "nvmf_tgt_poll_group_000", 00:14:43.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:43.882 "listen_address": { 00:14:43.882 "trtype": "TCP", 00:14:43.882 "adrfam": "IPv4", 00:14:43.882 "traddr": "10.0.0.3", 00:14:43.882 "trsvcid": "4420" 00:14:43.882 }, 00:14:43.882 "peer_address": { 00:14:43.882 "trtype": "TCP", 00:14:43.882 "adrfam": "IPv4", 00:14:43.882 "traddr": "10.0.0.1", 00:14:43.882 "trsvcid": "47446" 00:14:43.882 }, 00:14:43.882 "auth": { 00:14:43.882 "state": "completed", 00:14:43.882 "digest": "sha512", 00:14:43.882 "dhgroup": "ffdhe8192" 00:14:43.882 } 00:14:43.882 } 00:14:43.882 ]' 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:43.882 08:48:14 
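The bdev_nvme_attach_controller records above are the SPDK-host counterpart of the kernel connect: a second SPDK application, driven over /var/tmp/host.sock, attaches to the target using the same named keys, and the test confirms a controller called nvme0 exists via bdev_nvme_get_controllers before inspecting the qpair. In isolation, and assuming key0/ckey0 are key names that host application already holds, the sequence is approximately:

# Sketch of the SPDK-host attach and the controller-name check shown above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb
subnqn=nqn.2024-03.io.spdk:cnode0

"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# A successful DH-HMAC-CHAP handshake leaves one controller named nvme0 behind:
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]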
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.882 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.450 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:44.450 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:45.017 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.017 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:45.017 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.017 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.017 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.017 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.017 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:45.018 08:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.277 08:48:16 
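Between rounds the state is torn back down so the next digest/dhgroup/key combination starts from scratch, which is the pattern repeating above: the SPDK-host controller is detached after its qpair check, the kernel controller is disconnected after its connect check, and the host entry is removed from the subsystem so the following nvmf_subsystem_add_host can bind it to the next key. With the same sockets and NQNs as elsewhere in this log, the three cleanup calls are:

# Sketch of the per-round cleanup; in the script these are interleaved with
# the checks rather than run back to back.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb
subnqn=nqn.2024-03.io.spdk:cnode0

"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"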
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.277 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.844 00:14:45.844 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.844 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.844 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.103 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.103 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.103 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.103 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.103 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.103 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.104 { 00:14:46.104 "cntlid": 139, 00:14:46.104 "qid": 0, 00:14:46.104 "state": "enabled", 00:14:46.104 "thread": "nvmf_tgt_poll_group_000", 00:14:46.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:46.104 "listen_address": { 00:14:46.104 "trtype": "TCP", 00:14:46.104 "adrfam": "IPv4", 00:14:46.104 "traddr": "10.0.0.3", 00:14:46.104 "trsvcid": "4420" 00:14:46.104 }, 00:14:46.104 "peer_address": { 00:14:46.104 "trtype": "TCP", 00:14:46.104 "adrfam": "IPv4", 00:14:46.104 "traddr": "10.0.0.1", 00:14:46.104 "trsvcid": "47472" 00:14:46.104 }, 00:14:46.104 "auth": { 00:14:46.104 "state": "completed", 00:14:46.104 "digest": "sha512", 00:14:46.104 "dhgroup": "ffdhe8192" 00:14:46.104 } 00:14:46.104 } 00:14:46.104 ]' 00:14:46.104 08:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.362 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.362 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.362 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:46.362 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.362 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.362 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.362 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.629 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:46.629 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: --dhchap-ctrl-secret DHHC-1:02:ZjcxNzNmODQxMGRkM2EzNzdiNWIwMjNhZTRlZjY5Zjk2NzYxNGVkYjk4MTYxZDkwGw+nLA==: 00:14:47.568 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.568 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:47.568 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.568 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.568 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.568 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.568 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:47.568 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.827 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.393 00:14:48.393 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.393 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.393 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.651 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.651 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.651 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.651 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.651 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.651 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.651 { 00:14:48.651 "cntlid": 141, 00:14:48.651 "qid": 0, 00:14:48.651 "state": "enabled", 00:14:48.651 "thread": "nvmf_tgt_poll_group_000", 00:14:48.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:48.651 "listen_address": { 00:14:48.651 "trtype": "TCP", 00:14:48.651 "adrfam": "IPv4", 00:14:48.651 "traddr": "10.0.0.3", 00:14:48.651 "trsvcid": "4420" 00:14:48.651 }, 00:14:48.651 "peer_address": { 00:14:48.651 "trtype": "TCP", 00:14:48.651 "adrfam": "IPv4", 00:14:48.651 "traddr": "10.0.0.1", 00:14:48.651 "trsvcid": "53506" 00:14:48.651 }, 00:14:48.651 "auth": { 00:14:48.651 "state": "completed", 00:14:48.651 "digest": 
"sha512", 00:14:48.651 "dhgroup": "ffdhe8192" 00:14:48.651 } 00:14:48.651 } 00:14:48.651 ]' 00:14:48.651 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.910 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.910 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.910 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.910 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.910 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.910 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.910 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.168 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:49.168 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:01:M2I4ZDI5YjRjODhlNzQ0OWI4NzNmOTY4NDRhNTY4MDG25oqo: 00:14:49.748 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.748 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:49.748 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.748 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.748 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.748 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.748 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:49.748 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.315 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.879 00:14:50.879 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.879 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.879 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.137 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.137 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.137 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.137 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.396 { 00:14:51.396 "cntlid": 143, 00:14:51.396 "qid": 0, 00:14:51.396 "state": "enabled", 00:14:51.396 "thread": "nvmf_tgt_poll_group_000", 00:14:51.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:51.396 "listen_address": { 00:14:51.396 "trtype": "TCP", 00:14:51.396 "adrfam": "IPv4", 00:14:51.396 "traddr": "10.0.0.3", 00:14:51.396 "trsvcid": "4420" 00:14:51.396 }, 00:14:51.396 "peer_address": { 00:14:51.396 "trtype": "TCP", 00:14:51.396 "adrfam": "IPv4", 00:14:51.396 "traddr": "10.0.0.1", 00:14:51.396 "trsvcid": "53544" 00:14:51.396 }, 00:14:51.396 "auth": { 00:14:51.396 "state": "completed", 00:14:51.396 
"digest": "sha512", 00:14:51.396 "dhgroup": "ffdhe8192" 00:14:51.396 } 00:14:51.396 } 00:14:51.396 ]' 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.396 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.654 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:51.654 08:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.590 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.849 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.415 00:14:53.415 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.415 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.415 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.673 { 00:14:53.673 "cntlid": 145, 00:14:53.673 "qid": 0, 00:14:53.673 "state": "enabled", 00:14:53.673 "thread": "nvmf_tgt_poll_group_000", 00:14:53.673 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:53.673 "listen_address": { 00:14:53.673 "trtype": "TCP", 00:14:53.673 "adrfam": "IPv4", 00:14:53.673 "traddr": "10.0.0.3", 00:14:53.673 "trsvcid": "4420" 00:14:53.673 }, 00:14:53.673 "peer_address": { 00:14:53.673 "trtype": "TCP", 00:14:53.673 "adrfam": "IPv4", 00:14:53.673 "traddr": "10.0.0.1", 00:14:53.673 "trsvcid": "53572" 00:14:53.673 }, 00:14:53.673 "auth": { 00:14:53.673 "state": "completed", 00:14:53.673 "digest": "sha512", 00:14:53.673 "dhgroup": "ffdhe8192" 00:14:53.673 } 00:14:53.673 } 00:14:53.673 ]' 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:53.673 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.932 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.932 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.932 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.932 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.932 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.190 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:54.190 08:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:00:Yzg1ZGEyMmFiMGI0NzQ2MmE2NmRkNDQ3MGE3ZDhkMWViNWQ1MjNhZGJlY2UzZDY3cc8LyQ==: --dhchap-ctrl-secret DHHC-1:03:NDJjOTg5NWFkZjE3ODg5ZWY0MzZiMmVkYTQ1NzAxZDY4ZmM1MDZkOTY1MDFkZGQ5ODkxZjljODAwMGQ2ZjdmMFGZSyk=: 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 00:14:54.757 08:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.757 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:55.015 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.015 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:55.015 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:55.015 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:55.582 request: 00:14:55.582 { 00:14:55.582 "name": "nvme0", 00:14:55.582 "trtype": "tcp", 00:14:55.582 "traddr": "10.0.0.3", 00:14:55.582 "adrfam": "ipv4", 00:14:55.582 "trsvcid": "4420", 00:14:55.582 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:55.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:55.582 "prchk_reftag": false, 00:14:55.582 "prchk_guard": false, 00:14:55.582 "hdgst": false, 00:14:55.582 "ddgst": false, 00:14:55.582 "dhchap_key": "key2", 00:14:55.582 "allow_unrecognized_csi": false, 00:14:55.582 "method": "bdev_nvme_attach_controller", 00:14:55.582 "req_id": 1 00:14:55.582 } 00:14:55.582 Got JSON-RPC error response 00:14:55.582 response: 00:14:55.582 { 00:14:55.582 "code": -5, 00:14:55.582 "message": "Input/output error" 00:14:55.582 } 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:55.582 
08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:55.582 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.149 request: 00:14:56.149 { 00:14:56.149 "name": "nvme0", 00:14:56.149 "trtype": "tcp", 00:14:56.149 "traddr": "10.0.0.3", 00:14:56.149 "adrfam": "ipv4", 00:14:56.149 "trsvcid": "4420", 00:14:56.149 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:56.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:56.149 "prchk_reftag": false, 00:14:56.149 "prchk_guard": false, 00:14:56.149 "hdgst": false, 00:14:56.149 "ddgst": false, 00:14:56.149 "dhchap_key": "key1", 00:14:56.149 "dhchap_ctrlr_key": "ckey2", 00:14:56.149 "allow_unrecognized_csi": false, 00:14:56.149 "method": "bdev_nvme_attach_controller", 00:14:56.149 "req_id": 1 00:14:56.149 } 00:14:56.149 Got JSON-RPC error response 00:14:56.149 response: 00:14:56.149 { 
00:14:56.149 "code": -5, 00:14:56.149 "message": "Input/output error" 00:14:56.149 } 00:14:56.149 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:56.149 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:56.149 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:56.149 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:56.149 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:56.149 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.149 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.149 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.085 
request: 00:14:57.085 { 00:14:57.085 "name": "nvme0", 00:14:57.085 "trtype": "tcp", 00:14:57.085 "traddr": "10.0.0.3", 00:14:57.085 "adrfam": "ipv4", 00:14:57.085 "trsvcid": "4420", 00:14:57.085 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:57.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:14:57.085 "prchk_reftag": false, 00:14:57.085 "prchk_guard": false, 00:14:57.085 "hdgst": false, 00:14:57.085 "ddgst": false, 00:14:57.085 "dhchap_key": "key1", 00:14:57.085 "dhchap_ctrlr_key": "ckey1", 00:14:57.085 "allow_unrecognized_csi": false, 00:14:57.085 "method": "bdev_nvme_attach_controller", 00:14:57.085 "req_id": 1 00:14:57.085 } 00:14:57.085 Got JSON-RPC error response 00:14:57.085 response: 00:14:57.085 { 00:14:57.085 "code": -5, 00:14:57.085 "message": "Input/output error" 00:14:57.085 } 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67527 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67527 ']' 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67527 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67527 00:14:57.085 killing process with pid 67527 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67527' 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67527 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67527 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.085 08:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70736 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70736 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70736 ']' 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.085 08:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70736 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70736 ']' 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
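The restart traced above can be reproduced by hand; a minimal sketch, assuming the repository path, network namespace, and RPC socket shown in the logged command line (the rpc_get_methods poll stands in for the suite's waitforlisten helper):

  # Relaunch the target with DH-HMAC-CHAP debug logging; it starts paused
  # because of --wait-for-rpc, so block until its RPC socket answers before
  # loading any keyring keys.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done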
00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:58.461 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 null0 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kxU 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Z5Z ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z5Z 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BfY 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.7Np ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Np 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.719 08:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gVy 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.MTh ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MTh 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZBs 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
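With key0..key3 (and their ckey counterparts) registered through keyring_file_add_key, the attach that follows references keys by keyring name instead of inline DHHC-1 secrets. A condensed sketch of that flow, using only RPCs, NQNs, and sockets that appear in this log; the suite's rpc_cmd/hostrpc wrappers are replaced by direct rpc.py calls:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb

  # Target side: permit the host on the subsystem with the named DH-HMAC-CHAP key.
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3

  # Host side: pin the digest/dhgroup (as the per-key loop earlier in this log
  # does), then attach; the controller only comes up if key3 matches on both ends.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3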
00:14:58.719 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.123 nvme0n1 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.123 { 00:15:00.123 "cntlid": 1, 00:15:00.123 "qid": 0, 00:15:00.123 "state": "enabled", 00:15:00.123 "thread": "nvmf_tgt_poll_group_000", 00:15:00.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:15:00.123 "listen_address": { 00:15:00.123 "trtype": "TCP", 00:15:00.123 "adrfam": "IPv4", 00:15:00.123 "traddr": "10.0.0.3", 00:15:00.123 "trsvcid": "4420" 00:15:00.123 }, 00:15:00.123 "peer_address": { 00:15:00.123 "trtype": "TCP", 00:15:00.123 "adrfam": "IPv4", 00:15:00.123 "traddr": "10.0.0.1", 00:15:00.123 "trsvcid": "46108" 00:15:00.123 }, 00:15:00.123 "auth": { 00:15:00.123 "state": "completed", 00:15:00.123 "digest": "sha512", 00:15:00.123 "dhgroup": "ffdhe8192" 00:15:00.123 } 00:15:00.123 } 00:15:00.123 ]' 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.123 08:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.123 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.382 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.382 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.382 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.641 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:15:00.641 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key3 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:01.208 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.466 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.031 request: 00:15:02.031 { 00:15:02.031 "name": "nvme0", 00:15:02.031 "trtype": "tcp", 00:15:02.031 "traddr": "10.0.0.3", 00:15:02.031 "adrfam": "ipv4", 00:15:02.031 "trsvcid": "4420", 00:15:02.031 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:02.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:15:02.032 "prchk_reftag": false, 00:15:02.032 "prchk_guard": false, 00:15:02.032 "hdgst": false, 00:15:02.032 "ddgst": false, 00:15:02.032 "dhchap_key": "key3", 00:15:02.032 "allow_unrecognized_csi": false, 00:15:02.032 "method": "bdev_nvme_attach_controller", 00:15:02.032 "req_id": 1 00:15:02.032 } 00:15:02.032 Got JSON-RPC error response 00:15:02.032 response: 00:15:02.032 { 00:15:02.032 "code": -5, 00:15:02.032 "message": "Input/output error" 00:15:02.032 } 00:15:02.032 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:02.032 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.032 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.032 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.032 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:02.032 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:02.032 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:02.032 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:02.290 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:02.290 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:02.290 08:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:02.290 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:02.290 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.290 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:02.290 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.290 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.290 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.290 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.549 request: 00:15:02.549 { 00:15:02.549 "name": "nvme0", 00:15:02.549 "trtype": "tcp", 00:15:02.549 "traddr": "10.0.0.3", 00:15:02.549 "adrfam": "ipv4", 00:15:02.549 "trsvcid": "4420", 00:15:02.549 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:02.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:15:02.549 "prchk_reftag": false, 00:15:02.549 "prchk_guard": false, 00:15:02.549 "hdgst": false, 00:15:02.549 "ddgst": false, 00:15:02.549 "dhchap_key": "key3", 00:15:02.549 "allow_unrecognized_csi": false, 00:15:02.549 "method": "bdev_nvme_attach_controller", 00:15:02.549 "req_id": 1 00:15:02.549 } 00:15:02.549 Got JSON-RPC error response 00:15:02.549 response: 00:15:02.549 { 00:15:02.549 "code": -5, 00:15:02.549 "message": "Input/output error" 00:15:02.549 } 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.549 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.808 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:03.383 request: 00:15:03.383 { 00:15:03.383 "name": "nvme0", 00:15:03.383 "trtype": "tcp", 00:15:03.383 "traddr": "10.0.0.3", 00:15:03.383 "adrfam": "ipv4", 00:15:03.383 "trsvcid": "4420", 00:15:03.383 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:03.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:15:03.383 "prchk_reftag": false, 00:15:03.383 "prchk_guard": false, 00:15:03.383 "hdgst": false, 00:15:03.383 "ddgst": false, 00:15:03.383 "dhchap_key": "key0", 00:15:03.383 "dhchap_ctrlr_key": "key1", 00:15:03.383 "allow_unrecognized_csi": false, 00:15:03.383 "method": "bdev_nvme_attach_controller", 00:15:03.383 "req_id": 1 00:15:03.383 } 00:15:03.383 Got JSON-RPC error response 00:15:03.383 response: 00:15:03.383 { 00:15:03.383 "code": -5, 00:15:03.383 "message": "Input/output error" 00:15:03.383 } 00:15:03.383 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:03.383 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.383 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.383 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:15:03.383 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:03.383 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:03.383 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:03.640 nvme0n1 00:15:03.640 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:03.640 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:03.640 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.898 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.898 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.898 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.156 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 00:15:04.156 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.156 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.156 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.156 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:04.156 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:04.156 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:05.090 nvme0n1 00:15:05.091 08:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:05.091 08:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.091 08:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:05.348 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.348 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:05.348 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.348 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.348 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.348 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:05.348 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:05.348 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.914 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.914 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:15:05.914 08:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid 0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -l 0 --dhchap-secret DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: --dhchap-ctrl-secret DHHC-1:03:MjU4Y2YwYTRkN2I5ZTdjZjJiZGM0MTNlMGJiNzZjODQwYmFjNmQxZTg2NThiODI2YTU4OGUzM2U4ZGM1ZTYxNijmyts=: 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.481 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:06.740 08:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:07.310 request: 00:15:07.311 { 00:15:07.311 "name": "nvme0", 00:15:07.311 "trtype": "tcp", 00:15:07.311 "traddr": "10.0.0.3", 00:15:07.311 "adrfam": "ipv4", 00:15:07.311 "trsvcid": "4420", 00:15:07.311 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:07.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb", 00:15:07.311 "prchk_reftag": false, 00:15:07.311 "prchk_guard": false, 00:15:07.311 "hdgst": false, 00:15:07.311 "ddgst": false, 00:15:07.311 "dhchap_key": "key1", 00:15:07.311 "allow_unrecognized_csi": false, 00:15:07.311 "method": "bdev_nvme_attach_controller", 00:15:07.311 "req_id": 1 00:15:07.311 } 00:15:07.311 Got JSON-RPC error response 00:15:07.311 response: 00:15:07.311 { 00:15:07.311 "code": -5, 00:15:07.311 "message": "Input/output error" 00:15:07.311 } 00:15:07.311 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:07.311 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:07.311 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:07.311 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:07.311 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:07.311 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:07.311 08:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:08.687 nvme0n1 00:15:08.687 
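The trace above is the pattern this test repeats: the target restricts which DH-CHAP keys the host NQN may use via nvmf_subsystem_set_keys, and the host-side bdev_nvme_attach_controller only succeeds when it presents a matching --dhchap-key/--dhchap-ctrlr-key pair; a mismatch fails with -5 (Input/output error), as in the request/response dump above. A condensed sketch of one such cycle, reusing the RPCs, socket path, address and NQNs from this run (key names refer to DH-CHAP keys registered earlier in the test; rpc_cmd is the suite's wrapper for the target-side RPC socket, and the hostrpc helper below is written out for illustration):

  # Host-side RPC socket and target coordinates used throughout this run.
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb

  # Target side: allow only key1 (host key) / key2 (controller key) for this host.
  rpc_cmd nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key key2

  # Host side: the attach succeeds with the permitted pair ...
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key key2

  # ... shows up as controller nvme0, and is torn down before the next rotation.
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
  hostrpc bdev_nvme_detach_controller nvme0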
08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:08.687 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:08.687 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.946 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.946 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.946 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.323 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:09.323 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.323 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.323 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.323 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:09.323 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:09.323 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:09.582 nvme0n1 00:15:09.582 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:09.582 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:09.582 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.840 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.840 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.840 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.099 08:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: '' 2s 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: ]] 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGVmYTczMjA3Nzk0OTdjOTQ4MmVlYWFmOTdmMWY3MTE2vFz3: 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:10.099 08:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: 2s 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:12.629 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:12.629 08:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:12.630 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: 00:15:12.630 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:12.630 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:12.630 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:12.630 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: ]] 00:15:12.630 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDU2ZTFmNDQ5NmVlZmE1ZmU2ZGE1MDBkYmVjOGRhMjU0ODgzY2Y2NGFiZWY3NjE3JV7mHg==: 00:15:12.630 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:12.630 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:14.532 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:14.532 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:14.532 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:14.532 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:14.532 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:14.532 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:14.532 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:14.532 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.532 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:14.532 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.532 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.532 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.532 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:14.532 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:14.532 08:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:15.468 nvme0n1 00:15:15.468 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:15.468 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.468 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.468 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.468 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:15.468 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.036 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:16.036 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.036 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:16.294 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.294 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:16.294 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.294 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.294 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.294 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:16.294 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:16.553 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:16.553 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:16.553 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.812 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.812 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.812 08:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.812 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:16.813 08:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.381 request: 00:15:17.381 { 00:15:17.381 "name": "nvme0", 00:15:17.381 "dhchap_key": "key1", 00:15:17.381 "dhchap_ctrlr_key": "key3", 00:15:17.381 "method": "bdev_nvme_set_keys", 00:15:17.381 "req_id": 1 00:15:17.381 } 00:15:17.381 Got JSON-RPC error response 00:15:17.381 response: 00:15:17.381 { 00:15:17.381 "code": -13, 00:15:17.381 "message": "Permission denied" 00:15:17.381 } 00:15:17.381 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:17.381 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.381 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.381 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.381 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:17.381 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:17.381 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.640 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:17.640 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:19.016 08:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:19.953 nvme0n1 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:19.953 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:20.888 request: 00:15:20.888 { 00:15:20.888 "name": "nvme0", 00:15:20.888 "dhchap_key": "key2", 00:15:20.888 "dhchap_ctrlr_key": "key0", 00:15:20.888 "method": "bdev_nvme_set_keys", 00:15:20.888 "req_id": 1 00:15:20.888 } 00:15:20.888 Got JSON-RPC error response 00:15:20.888 response: 00:15:20.888 { 00:15:20.888 "code": -13, 00:15:20.888 "message": "Permission denied" 00:15:20.888 } 00:15:20.888 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:20.888 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.889 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.889 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.889 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:20.889 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.889 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:21.147 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:21.147 08:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:22.083 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:22.083 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.083 08:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67551 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67551 ']' 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67551 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67551 00:15:22.343 killing process with pid 67551 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:22.343 08:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67551' 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67551 00:15:22.343 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67551 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:22.930 rmmod nvme_tcp 00:15:22.930 rmmod nvme_fabrics 00:15:22.930 rmmod nvme_keyring 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70736 ']' 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70736 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70736 ']' 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70736 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.930 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70736 00:15:23.216 killing process with pid 70736 00:15:23.216 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.216 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.216 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70736' 00:15:23.216 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70736 00:15:23.216 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70736 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
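The iptr cleanup being traced at this point works because every firewall rule the test added carried an 'SPDK_NVMF:' tag in its comment (the matching ipts calls are visible in the setup trace further down); stripping the tagged lines from a saved ruleset removes all of them at once. In outline, using the same commands that appear in this log:

  # Setup: insert the ACCEPT rule and record its arguments in a comment tag.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # Teardown: dump the ruleset, drop every tagged line, and load the rest back.
  iptables-save | grep -v SPDK_NVMF | iptables-restore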
00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:23.216 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kxU /tmp/spdk.key-sha256.BfY /tmp/spdk.key-sha384.gVy /tmp/spdk.key-sha512.ZBs /tmp/spdk.key-sha512.Z5Z /tmp/spdk.key-sha384.7Np /tmp/spdk.key-sha256.MTh '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:23.475 00:15:23.475 real 3m26.731s 00:15:23.475 user 8m15.039s 00:15:23.475 sys 0m32.483s 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.475 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.475 ************************************ 00:15:23.475 END TEST nvmf_auth_target 
00:15:23.475 ************************************ 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.735 ************************************ 00:15:23.735 START TEST nvmf_bdevio_no_huge 00:15:23.735 ************************************ 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:23.735 * Looking for test storage... 00:15:23.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.735 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.736 --rc genhtml_branch_coverage=1 00:15:23.736 --rc genhtml_function_coverage=1 00:15:23.736 --rc genhtml_legend=1 00:15:23.736 --rc geninfo_all_blocks=1 00:15:23.736 --rc geninfo_unexecuted_blocks=1 00:15:23.736 00:15:23.736 ' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.736 --rc genhtml_branch_coverage=1 00:15:23.736 --rc genhtml_function_coverage=1 00:15:23.736 --rc genhtml_legend=1 00:15:23.736 --rc geninfo_all_blocks=1 00:15:23.736 --rc geninfo_unexecuted_blocks=1 00:15:23.736 00:15:23.736 ' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.736 --rc genhtml_branch_coverage=1 00:15:23.736 --rc genhtml_function_coverage=1 00:15:23.736 --rc genhtml_legend=1 00:15:23.736 --rc geninfo_all_blocks=1 00:15:23.736 --rc geninfo_unexecuted_blocks=1 00:15:23.736 00:15:23.736 ' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.736 --rc genhtml_branch_coverage=1 00:15:23.736 --rc genhtml_function_coverage=1 00:15:23.736 --rc genhtml_legend=1 00:15:23.736 --rc geninfo_all_blocks=1 00:15:23.736 --rc geninfo_unexecuted_blocks=1 00:15:23.736 00:15:23.736 ' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.736 
08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:23.736 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:23.737 
08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:23.737 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:23.996 Cannot find device "nvmf_init_br" 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:23.996 Cannot find device "nvmf_init_br2" 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:23.996 Cannot find device "nvmf_tgt_br" 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.996 Cannot find device "nvmf_tgt_br2" 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:23.996 Cannot find device "nvmf_init_br" 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:23.996 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:23.997 Cannot find device "nvmf_init_br2" 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:23.997 Cannot find device "nvmf_tgt_br" 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:23.997 Cannot find device "nvmf_tgt_br2" 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:23.997 Cannot find device "nvmf_br" 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:23.997 Cannot find device "nvmf_init_if" 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:23.997 Cannot find device "nvmf_init_if2" 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:23.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:23.997 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:24.257 08:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:24.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:24.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:15:24.257 00:15:24.257 --- 10.0.0.3 ping statistics --- 00:15:24.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.257 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:24.257 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:24.257 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:15:24.257 00:15:24.257 --- 10.0.0.4 ping statistics --- 00:15:24.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.257 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:24.257 08:48:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:24.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:24.257 00:15:24.257 --- 10.0.0.1 ping statistics --- 00:15:24.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.257 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:24.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:24.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:24.257 00:15:24.257 --- 10.0.0.2 ping statistics --- 00:15:24.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.257 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71402 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71402 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71402 ']' 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:24.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.257 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:24.257 [2024-11-20 08:48:55.099008] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:15:24.257 [2024-11-20 08:48:55.099137] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:24.516 [2024-11-20 08:48:55.268268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.516 [2024-11-20 08:48:55.365526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.516 [2024-11-20 08:48:55.365636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.516 [2024-11-20 08:48:55.365670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.516 [2024-11-20 08:48:55.365682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.516 [2024-11-20 08:48:55.365693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.516 [2024-11-20 08:48:55.366516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:24.516 [2024-11-20 08:48:55.367477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:24.516 [2024-11-20 08:48:55.367611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:24.516 [2024-11-20 08:48:55.367633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.516 [2024-11-20 08:48:55.374163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.453 [2024-11-20 08:48:56.219092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.453 Malloc0 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.453 08:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.453 [2024-11-20 08:48:56.260210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:25.453 { 00:15:25.453 "params": { 00:15:25.453 "name": "Nvme$subsystem", 00:15:25.453 "trtype": "$TEST_TRANSPORT", 00:15:25.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.453 "adrfam": "ipv4", 00:15:25.453 "trsvcid": "$NVMF_PORT", 00:15:25.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.453 "hdgst": ${hdgst:-false}, 00:15:25.453 "ddgst": ${ddgst:-false} 00:15:25.453 }, 00:15:25.453 "method": "bdev_nvme_attach_controller" 00:15:25.453 } 00:15:25.453 EOF 00:15:25.453 )") 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
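For reference, the subsystem that bdevio exercises here is built from a handful of standard SPDK RPCs, all of which appear in the trace above; the following is only a minimal by-hand sketch of that same setup, assuming the nvmf_tgt from this run is still up and answering on its default /var/tmp/spdk.sock RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with the same options the harness passes (target/bdevio.sh@18)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace (target/bdevio.sh@19)
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # subsystem, namespace and TCP listener on 10.0.0.3:4420 (target/bdevio.sh@20-22)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio is then pointed at that listener with the generated attach-controller JSON shown below, passed via --json /dev/fd/62 --no-huge -s 1024.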
00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:25.453 08:48:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:25.453 "params": { 00:15:25.453 "name": "Nvme1", 00:15:25.453 "trtype": "tcp", 00:15:25.453 "traddr": "10.0.0.3", 00:15:25.453 "adrfam": "ipv4", 00:15:25.453 "trsvcid": "4420", 00:15:25.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.453 "hdgst": false, 00:15:25.453 "ddgst": false 00:15:25.453 }, 00:15:25.453 "method": "bdev_nvme_attach_controller" 00:15:25.453 }' 00:15:25.453 [2024-11-20 08:48:56.322102] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:25.453 [2024-11-20 08:48:56.322210] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71438 ] 00:15:25.712 [2024-11-20 08:48:56.482900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.712 [2024-11-20 08:48:56.558390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.712 [2024-11-20 08:48:56.558526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.712 [2024-11-20 08:48:56.558531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.712 [2024-11-20 08:48:56.571877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.971 I/O targets: 00:15:25.971 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:25.971 00:15:25.971 00:15:25.971 CUnit - A unit testing framework for C - Version 2.1-3 00:15:25.971 http://cunit.sourceforge.net/ 00:15:25.971 00:15:25.971 00:15:25.971 Suite: bdevio tests on: Nvme1n1 00:15:25.971 Test: blockdev write read block ...passed 00:15:25.971 Test: blockdev write zeroes read block ...passed 00:15:25.971 Test: blockdev write zeroes read no split ...passed 00:15:25.971 Test: blockdev write zeroes read split ...passed 00:15:25.971 Test: blockdev write zeroes read split partial ...passed 00:15:25.971 Test: blockdev reset ...[2024-11-20 08:48:56.816378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:25.971 [2024-11-20 08:48:56.816537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1110310 (9): Bad file descriptor 00:15:25.971 [2024-11-20 08:48:56.832617] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:25.971 passed 00:15:25.971 Test: blockdev write read 8 blocks ...passed 00:15:25.971 Test: blockdev write read size > 128k ...passed 00:15:25.971 Test: blockdev write read invalid size ...passed 00:15:25.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:25.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:25.971 Test: blockdev write read max offset ...passed 00:15:25.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:25.971 Test: blockdev writev readv 8 blocks ...passed 00:15:25.971 Test: blockdev writev readv 30 x 1block ...passed 00:15:25.971 Test: blockdev writev readv block ...passed 00:15:25.971 Test: blockdev writev readv size > 128k ...passed 00:15:25.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:25.971 Test: blockdev comparev and writev ...[2024-11-20 08:48:56.841997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.971 [2024-11-20 08:48:56.842056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.842086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.971 [2024-11-20 08:48:56.842104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.842672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.971 [2024-11-20 08:48:56.842716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.842752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.971 [2024-11-20 08:48:56.842770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.843270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.971 [2024-11-20 08:48:56.843315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.843341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.971 [2024-11-20 08:48:56.843359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.843842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.971 [2024-11-20 08:48:56.843887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.843915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:25.971 [2024-11-20 08:48:56.843932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:25.971 passed 00:15:25.971 Test: blockdev nvme passthru rw ...passed 00:15:25.971 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:48:56.845056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:25.971 [2024-11-20 08:48:56.845105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.845249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:25.971 [2024-11-20 08:48:56.845279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.845396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:25.971 [2024-11-20 08:48:56.845428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:25.971 [2024-11-20 08:48:56.845549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:25.971 [2024-11-20 08:48:56.845589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:25.971 passed 00:15:25.971 Test: blockdev nvme admin passthru ...passed 00:15:25.971 Test: blockdev copy ...passed 00:15:25.971 00:15:25.971 Run Summary: Type Total Ran Passed Failed Inactive 00:15:25.971 suites 1 1 n/a 0 0 00:15:25.971 tests 23 23 23 0 0 00:15:25.971 asserts 152 152 152 0 n/a 00:15:25.971 00:15:25.971 Elapsed time = 0.164 seconds 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.539 rmmod nvme_tcp 00:15:26.539 rmmod nvme_fabrics 00:15:26.539 rmmod nvme_keyring 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71402 ']' 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71402 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71402 ']' 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71402 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71402 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:26.539 killing process with pid 71402 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71402' 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71402 00:15:26.539 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71402 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:27.106 08:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:27.106 08:48:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:27.106 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:27.106 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:27.365 00:15:27.365 real 0m3.665s 00:15:27.365 user 0m11.300s 00:15:27.365 sys 0m1.551s 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:27.365 ************************************ 00:15:27.365 END TEST nvmf_bdevio_no_huge 00:15:27.365 ************************************ 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.365 ************************************ 00:15:27.365 START TEST nvmf_tls 00:15:27.365 ************************************ 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:27.365 * Looking for test storage... 
00:15:27.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:27.365 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:15:27.366 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:27.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.626 --rc genhtml_branch_coverage=1 00:15:27.626 --rc genhtml_function_coverage=1 00:15:27.626 --rc genhtml_legend=1 00:15:27.626 --rc geninfo_all_blocks=1 00:15:27.626 --rc geninfo_unexecuted_blocks=1 00:15:27.626 00:15:27.626 ' 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:27.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.626 --rc genhtml_branch_coverage=1 00:15:27.626 --rc genhtml_function_coverage=1 00:15:27.626 --rc genhtml_legend=1 00:15:27.626 --rc geninfo_all_blocks=1 00:15:27.626 --rc geninfo_unexecuted_blocks=1 00:15:27.626 00:15:27.626 ' 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:27.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.626 --rc genhtml_branch_coverage=1 00:15:27.626 --rc genhtml_function_coverage=1 00:15:27.626 --rc genhtml_legend=1 00:15:27.626 --rc geninfo_all_blocks=1 00:15:27.626 --rc geninfo_unexecuted_blocks=1 00:15:27.626 00:15:27.626 ' 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:27.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.626 --rc genhtml_branch_coverage=1 00:15:27.626 --rc genhtml_function_coverage=1 00:15:27.626 --rc genhtml_legend=1 00:15:27.626 --rc geninfo_all_blocks=1 00:15:27.626 --rc geninfo_unexecuted_blocks=1 00:15:27.626 00:15:27.626 ' 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.626 08:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.626 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.627 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.627 
08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:27.627 Cannot find device "nvmf_init_br" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:27.627 Cannot find device "nvmf_init_br2" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:27.627 Cannot find device "nvmf_tgt_br" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.627 Cannot find device "nvmf_tgt_br2" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:27.627 Cannot find device "nvmf_init_br" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:27.627 Cannot find device "nvmf_init_br2" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:27.627 Cannot find device "nvmf_tgt_br" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:27.627 Cannot find device "nvmf_tgt_br2" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:27.627 Cannot find device "nvmf_br" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:27.627 Cannot find device "nvmf_init_if" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:27.627 Cannot find device "nvmf_init_if2" 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.627 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.628 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:27.628 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.628 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.628 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:27.887 08:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:27.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:27.887 00:15:27.887 --- 10.0.0.3 ping statistics --- 00:15:27.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.887 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:27.887 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:27.887 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:27.887 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:27.887 00:15:27.887 --- 10.0.0.4 ping statistics --- 00:15:27.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.888 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:27.888 00:15:27.888 --- 10.0.0.1 ping statistics --- 00:15:27.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.888 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:27.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:27.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:27.888 00:15:27.888 --- 10.0.0.2 ping statistics --- 00:15:27.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.888 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71677 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71677 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71677 ']' 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.888 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.146 [2024-11-20 08:48:58.812904] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
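Condensed from the nvmf/common.sh trace above, the test network is two veth pairs per side bridged on the host, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace and NVMe/TCP port 4420 opened in iptables. A minimal sketch of those steps (interface names, addresses and rules taken from the log; the namespace itself is created slightly earlier in the helper and is shown here only for completeness; error handling omitted):

# Host-side "init" interfaces reach target-side "tgt" interfaces inside
# nvmf_tgt_ns_spdk across the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk                                  # created earlier in the helper (assumption)
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # host side, pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # host side, pair 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side, pair 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target side, pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Accept NVMe/TCP on port 4420 and let traffic hairpin across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3    # host -> target namespace, as verified by the pings in the trace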
00:15:28.147 [2024-11-20 08:48:58.813767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.147 [2024-11-20 08:48:58.965873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.147 [2024-11-20 08:48:59.044093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.147 [2024-11-20 08:48:59.044443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.147 [2024-11-20 08:48:59.044640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.147 [2024-11-20 08:48:59.044776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.147 [2024-11-20 08:48:59.044851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.147 [2024-11-20 08:48:59.045429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.107 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.107 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:29.107 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.107 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.107 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.107 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.107 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:29.107 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:29.366 true 00:15:29.366 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:29.366 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:29.624 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:29.624 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:29.624 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:29.882 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:29.882 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:30.142 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:30.142 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:30.142 08:49:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:30.402 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:30.402 08:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.661 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:30.661 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:30.661 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.661 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:31.229 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:31.229 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:31.229 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:31.230 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:31.230 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:31.797 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:31.797 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:31.797 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:31.797 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:31.797 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:32.363 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:32.363 08:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.MFkTJxkMOM 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.l8dm8lbt1J 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MFkTJxkMOM 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.l8dm8lbt1J 00:15:32.363 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:32.622 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:32.881 [2024-11-20 08:49:03.701979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.881 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.MFkTJxkMOM 00:15:32.881 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MFkTJxkMOM 00:15:32.881 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:33.140 [2024-11-20 08:49:04.024322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.140 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:33.710 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:33.710 [2024-11-20 08:49:04.596472] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.710 [2024-11-20 08:49:04.596795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.710 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:34.281 malloc0 00:15:34.281 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:34.281 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.MFkTJxkMOM 00:15:34.539 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:35.106 08:49:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MFkTJxkMOM 00:15:45.075 Initializing NVMe Controllers 00:15:45.075 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:45.075 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:45.075 Initialization complete. Launching workers. 00:15:45.075 ======================================================== 00:15:45.075 Latency(us) 00:15:45.075 Device Information : IOPS MiB/s Average min max 00:15:45.075 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9383.90 36.66 6821.89 978.71 8858.67 00:15:45.075 ======================================================== 00:15:45.075 Total : 9383.90 36.66 6821.89 978.71 8858.67 00:15:45.075 00:15:45.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFkTJxkMOM 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MFkTJxkMOM 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71921 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71921 /var/tmp/bdevperf.sock 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71921 ']' 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
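For reference, the two /tmp keys written above are TP 8006-style PSK interchange strings of the form NVMeTLSkey-1:<hash>:<base64 payload>:. The exact source of the format_interchange_psk/format_key helpers is not shown in this log; the sketch below is an assumption about what they compute (configured PSK bytes followed by a little-endian CRC-32, then base64), which reproduces the NVMeTLSkey-1:01:... values seen in the trace:

# Sketch only, not the verbatim helper: NVMeTLSkey-1:<2-hex hash id>:<base64(key || crc32_le(key))>:
format_interchange_psk_sketch() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # integrity suffix appended before encoding (assumption)
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# Should reproduce the first key the trace stored in /tmp/tmp.MFkTJxkMOM:
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1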
00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.075 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.075 [2024-11-20 08:49:15.983134] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:45.075 [2024-11-20 08:49:15.983297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71921 ] 00:15:45.333 [2024-11-20 08:49:16.147100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.333 [2024-11-20 08:49:16.230227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.591 [2024-11-20 08:49:16.308530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:46.235 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.235 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:46.235 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MFkTJxkMOM 00:15:46.494 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:46.751 [2024-11-20 08:49:17.541572] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:46.751 TLSTESTn1 00:15:46.751 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:47.009 Running I/O for 10 seconds... 
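Stripped of the xtrace noise, the TLS path exercised above comes down to a handful of RPCs; a condensed sketch, with commands and arguments as they appear in the trace and $rpc standing for scripts/rpc.py (the target itself was launched earlier with ip netns exec nvmf_tgt_ns_spdk nvmf_tgt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (default /var/tmp/spdk.sock RPC socket):
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables TLS
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.MFkTJxkMOM
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf, its own RPC socket):
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MFkTJxkMOM
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

After these calls, bdevperf's perform_tests RPC drives verify I/O over the TLS-protected queue pair; the results of that TLSTESTn1 run follow below.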
00:15:48.892 3766.00 IOPS, 14.71 MiB/s [2024-11-20T08:49:21.183Z] 3973.00 IOPS, 15.52 MiB/s [2024-11-20T08:49:22.128Z] 3986.00 IOPS, 15.57 MiB/s [2024-11-20T08:49:23.091Z] 4029.75 IOPS, 15.74 MiB/s [2024-11-20T08:49:24.026Z] 4034.60 IOPS, 15.76 MiB/s [2024-11-20T08:49:24.964Z] 4045.00 IOPS, 15.80 MiB/s [2024-11-20T08:49:25.897Z] 4051.57 IOPS, 15.83 MiB/s [2024-11-20T08:49:26.833Z] 4055.12 IOPS, 15.84 MiB/s [2024-11-20T08:49:28.211Z] 4059.11 IOPS, 15.86 MiB/s [2024-11-20T08:49:28.211Z] 4054.40 IOPS, 15.84 MiB/s 00:15:57.296 Latency(us) 00:15:57.296 [2024-11-20T08:49:28.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.296 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:57.296 Verification LBA range: start 0x0 length 0x2000 00:15:57.296 TLSTESTn1 : 10.02 4059.92 15.86 0.00 0.00 31467.06 6613.18 34793.66 00:15:57.296 [2024-11-20T08:49:28.211Z] =================================================================================================================== 00:15:57.296 [2024-11-20T08:49:28.211Z] Total : 4059.92 15.86 0.00 0.00 31467.06 6613.18 34793.66 00:15:57.296 { 00:15:57.296 "results": [ 00:15:57.296 { 00:15:57.296 "job": "TLSTESTn1", 00:15:57.296 "core_mask": "0x4", 00:15:57.297 "workload": "verify", 00:15:57.297 "status": "finished", 00:15:57.297 "verify_range": { 00:15:57.297 "start": 0, 00:15:57.297 "length": 8192 00:15:57.297 }, 00:15:57.297 "queue_depth": 128, 00:15:57.297 "io_size": 4096, 00:15:57.297 "runtime": 10.01769, 00:15:57.297 "iops": 4059.918005049068, 00:15:57.297 "mibps": 15.859054707222922, 00:15:57.297 "io_failed": 0, 00:15:57.297 "io_timeout": 0, 00:15:57.297 "avg_latency_us": 31467.064673734472, 00:15:57.297 "min_latency_us": 6613.178181818182, 00:15:57.297 "max_latency_us": 34793.65818181818 00:15:57.297 } 00:15:57.297 ], 00:15:57.297 "core_count": 1 00:15:57.297 } 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71921 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71921 ']' 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71921 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71921 00:15:57.297 killing process with pid 71921 00:15:57.297 Received shutdown signal, test time was about 10.000000 seconds 00:15:57.297 00:15:57.297 Latency(us) 00:15:57.297 [2024-11-20T08:49:28.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.297 [2024-11-20T08:49:28.212Z] =================================================================================================================== 00:15:57.297 [2024-11-20T08:49:28.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71921' 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71921 00:15:57.297 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71921 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l8dm8lbt1J 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l8dm8lbt1J 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l8dm8lbt1J 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.l8dm8lbt1J 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72061 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72061 /var/tmp/bdevperf.sock 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72061 ']' 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:57.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.297 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.297 [2024-11-20 08:49:28.174685] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:15:57.297 [2024-11-20 08:49:28.175027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72061 ] 00:15:57.555 [2024-11-20 08:49:28.321890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.555 [2024-11-20 08:49:28.383692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.555 [2024-11-20 08:49:28.456744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.492 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.492 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:58.492 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l8dm8lbt1J 00:15:58.751 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:59.011 [2024-11-20 08:49:29.715181] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:59.011 [2024-11-20 08:49:29.722435] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:59.011 [2024-11-20 08:49:29.723164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c1fb0 (107): Transport endpoint is not connected 00:15:59.011 [2024-11-20 08:49:29.724154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c1fb0 (9): Bad file descriptor 00:15:59.011 [2024-11-20 08:49:29.725164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:59.011 [2024-11-20 08:49:29.725188] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:59.011 [2024-11-20 08:49:29.725215] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:59.011 [2024-11-20 08:49:29.725231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:15:59.011 request: 00:15:59.011 { 00:15:59.011 "name": "TLSTEST", 00:15:59.011 "trtype": "tcp", 00:15:59.011 "traddr": "10.0.0.3", 00:15:59.011 "adrfam": "ipv4", 00:15:59.011 "trsvcid": "4420", 00:15:59.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.011 "prchk_reftag": false, 00:15:59.011 "prchk_guard": false, 00:15:59.011 "hdgst": false, 00:15:59.011 "ddgst": false, 00:15:59.011 "psk": "key0", 00:15:59.011 "allow_unrecognized_csi": false, 00:15:59.011 "method": "bdev_nvme_attach_controller", 00:15:59.011 "req_id": 1 00:15:59.011 } 00:15:59.011 Got JSON-RPC error response 00:15:59.011 response: 00:15:59.011 { 00:15:59.011 "code": -5, 00:15:59.011 "message": "Input/output error" 00:15:59.011 } 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72061 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72061 ']' 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72061 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72061 00:15:59.011 killing process with pid 72061 00:15:59.011 Received shutdown signal, test time was about 10.000000 seconds 00:15:59.011 00:15:59.011 Latency(us) 00:15:59.011 [2024-11-20T08:49:29.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.011 [2024-11-20T08:49:29.926Z] =================================================================================================================== 00:15:59.011 [2024-11-20T08:49:29.926Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72061' 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72061 00:15:59.011 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72061 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFkTJxkMOM 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFkTJxkMOM 
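The next traced case attaches as nqn.2016-06.io.spdk:host2 while presenting the key registered for host1. The target resolves PSKs per (host NQN, subsystem NQN) identity, so the handshake fails with "Could not find PSK for identity". Purely as an illustration (not something this run does), host2 could be allowed by registering its own key on the target with the same RPCs used earlier; key name and path here are made up:

# Hypothetical only -- the traced run deliberately leaves host2 unregistered so the attach fails.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key1 /tmp/host2_psk
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key1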
00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFkTJxkMOM 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MFkTJxkMOM 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72090 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72090 /var/tmp/bdevperf.sock 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72090 ']' 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.270 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.270 [2024-11-20 08:49:30.092045] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:15:59.270 [2024-11-20 08:49:30.092156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72090 ] 00:15:59.529 [2024-11-20 08:49:30.236405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.529 [2024-11-20 08:49:30.306837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.529 [2024-11-20 08:49:30.377459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.788 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.788 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:59.788 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MFkTJxkMOM 00:16:00.046 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:00.306 [2024-11-20 08:49:30.982715] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:00.306 [2024-11-20 08:49:30.992484] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:00.306 [2024-11-20 08:49:30.992556] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:00.306 [2024-11-20 08:49:30.992612] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:00.306 [2024-11-20 08:49:30.992935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bdfb0 (107): Transport endpoint is not connected 00:16:00.306 [2024-11-20 08:49:30.993926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bdfb0 (9): Bad file descriptor 00:16:00.306 [2024-11-20 08:49:30.994923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:00.306 [2024-11-20 08:49:30.994948] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:00.306 [2024-11-20 08:49:30.994961] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:00.306 [2024-11-20 08:49:30.994981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:00.306 request: 00:16:00.306 { 00:16:00.306 "name": "TLSTEST", 00:16:00.306 "trtype": "tcp", 00:16:00.306 "traddr": "10.0.0.3", 00:16:00.306 "adrfam": "ipv4", 00:16:00.306 "trsvcid": "4420", 00:16:00.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.306 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:00.306 "prchk_reftag": false, 00:16:00.306 "prchk_guard": false, 00:16:00.306 "hdgst": false, 00:16:00.306 "ddgst": false, 00:16:00.306 "psk": "key0", 00:16:00.306 "allow_unrecognized_csi": false, 00:16:00.306 "method": "bdev_nvme_attach_controller", 00:16:00.306 "req_id": 1 00:16:00.306 } 00:16:00.306 Got JSON-RPC error response 00:16:00.306 response: 00:16:00.306 { 00:16:00.306 "code": -5, 00:16:00.306 "message": "Input/output error" 00:16:00.306 } 00:16:00.306 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72090 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72090 ']' 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72090 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72090 00:16:00.307 killing process with pid 72090 00:16:00.307 Received shutdown signal, test time was about 10.000000 seconds 00:16:00.307 00:16:00.307 Latency(us) 00:16:00.307 [2024-11-20T08:49:31.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.307 [2024-11-20T08:49:31.222Z] =================================================================================================================== 00:16:00.307 [2024-11-20T08:49:31.222Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72090' 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72090 00:16:00.307 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72090 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFkTJxkMOM 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFkTJxkMOM 
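The case that follows flips the mismatch to the subsystem side: host1 presents the key registered for cnode1 while asking for cnode2, so the PSK identity lookup fails the same way. A minimal sketch of asserting such a negative case by hand against the same bdevperf RPC socket, under the assumption that the NOT/run_bdevperf wrappers in the trace do essentially this plus process cleanup:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Expect failure: no PSK is registered for the (host1, cnode2) identity on the target.
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 \
        -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "unexpected: attach to cnode2 succeeded" >&2
    exit 1
fi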
00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFkTJxkMOM 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MFkTJxkMOM 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72111 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72111 /var/tmp/bdevperf.sock 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72111 ']' 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.566 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.566 [2024-11-20 08:49:31.363989] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:16:00.566 [2024-11-20 08:49:31.364092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72111 ] 00:16:00.825 [2024-11-20 08:49:31.511113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.825 [2024-11-20 08:49:31.584380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.825 [2024-11-20 08:49:31.658112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.825 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.825 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:00.825 08:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MFkTJxkMOM 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:01.393 [2024-11-20 08:49:32.249413] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:01.393 [2024-11-20 08:49:32.254957] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:01.393 [2024-11-20 08:49:32.255183] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:01.393 [2024-11-20 08:49:32.255495] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:01.393 [2024-11-20 08:49:32.255634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x718fb0 (107): Transport endpoint is not connected 00:16:01.393 [2024-11-20 08:49:32.256625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x718fb0 (9): Bad file descriptor 00:16:01.393 [2024-11-20 08:49:32.257622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:01.393 [2024-11-20 08:49:32.257787] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:01.393 [2024-11-20 08:49:32.257911] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:01.393 [2024-11-20 08:49:32.258082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:01.393 request: 00:16:01.393 { 00:16:01.393 "name": "TLSTEST", 00:16:01.393 "trtype": "tcp", 00:16:01.393 "traddr": "10.0.0.3", 00:16:01.393 "adrfam": "ipv4", 00:16:01.393 "trsvcid": "4420", 00:16:01.393 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:01.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.393 "prchk_reftag": false, 00:16:01.393 "prchk_guard": false, 00:16:01.393 "hdgst": false, 00:16:01.393 "ddgst": false, 00:16:01.393 "psk": "key0", 00:16:01.393 "allow_unrecognized_csi": false, 00:16:01.393 "method": "bdev_nvme_attach_controller", 00:16:01.393 "req_id": 1 00:16:01.393 } 00:16:01.393 Got JSON-RPC error response 00:16:01.393 response: 00:16:01.393 { 00:16:01.393 "code": -5, 00:16:01.393 "message": "Input/output error" 00:16:01.393 } 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72111 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72111 ']' 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72111 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72111 00:16:01.393 killing process with pid 72111 00:16:01.393 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.393 00:16:01.393 Latency(us) 00:16:01.393 [2024-11-20T08:49:32.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.393 [2024-11-20T08:49:32.308Z] =================================================================================================================== 00:16:01.393 [2024-11-20T08:49:32.308Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72111' 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72111 00:16:01.393 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72111 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:01.652 08:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:01.652 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72141 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72141 /var/tmp/bdevperf.sock 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72141 ']' 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.910 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.910 [2024-11-20 08:49:32.619618] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:16:01.910 [2024-11-20 08:49:32.619717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72141 ] 00:16:01.910 [2024-11-20 08:49:32.759954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.910 [2024-11-20 08:49:32.822482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.168 [2024-11-20 08:49:32.895854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:02.168 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.168 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:02.168 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:02.427 [2024-11-20 08:49:33.253837] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:02.427 [2024-11-20 08:49:33.253901] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:02.427 request: 00:16:02.427 { 00:16:02.427 "name": "key0", 00:16:02.427 "path": "", 00:16:02.427 "method": "keyring_file_add_key", 00:16:02.427 "req_id": 1 00:16:02.427 } 00:16:02.427 Got JSON-RPC error response 00:16:02.427 response: 00:16:02.427 { 00:16:02.427 "code": -1, 00:16:02.427 "message": "Operation not permitted" 00:16:02.427 } 00:16:02.427 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:02.687 [2024-11-20 08:49:33.498043] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:02.687 [2024-11-20 08:49:33.498148] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:02.687 request: 00:16:02.687 { 00:16:02.687 "name": "TLSTEST", 00:16:02.687 "trtype": "tcp", 00:16:02.687 "traddr": "10.0.0.3", 00:16:02.687 "adrfam": "ipv4", 00:16:02.687 "trsvcid": "4420", 00:16:02.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:02.687 "prchk_reftag": false, 00:16:02.687 "prchk_guard": false, 00:16:02.687 "hdgst": false, 00:16:02.687 "ddgst": false, 00:16:02.687 "psk": "key0", 00:16:02.687 "allow_unrecognized_csi": false, 00:16:02.687 "method": "bdev_nvme_attach_controller", 00:16:02.687 "req_id": 1 00:16:02.687 } 00:16:02.687 Got JSON-RPC error response 00:16:02.687 response: 00:16:02.687 { 00:16:02.687 "code": -126, 00:16:02.687 "message": "Required key not available" 00:16:02.687 } 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72141 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72141 ']' 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72141 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.687 08:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72141 00:16:02.687 killing process with pid 72141 00:16:02.687 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.687 00:16:02.687 Latency(us) 00:16:02.687 [2024-11-20T08:49:33.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.687 [2024-11-20T08:49:33.602Z] =================================================================================================================== 00:16:02.687 [2024-11-20T08:49:33.602Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72141' 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72141 00:16:02.687 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72141 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71677 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71677 ']' 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71677 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71677 00:16:02.957 killing process with pid 71677 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71677' 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71677 00:16:02.957 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71677 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:03.261 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.YtGfagDfoA 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.YtGfagDfoA 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72174 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72174 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72174 ']' 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.262 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.521 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.521 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.521 [2024-11-20 08:49:34.232486] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:03.521 [2024-11-20 08:49:34.232619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.521 [2024-11-20 08:49:34.379674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.779 [2024-11-20 08:49:34.441651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.779 [2024-11-20 08:49:34.441755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:03.779 [2024-11-20 08:49:34.441767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.779 [2024-11-20 08:49:34.441776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.779 [2024-11-20 08:49:34.441783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.779 [2024-11-20 08:49:34.442226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.779 [2024-11-20 08:49:34.513982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.779 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.779 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:03.780 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.780 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.780 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.780 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.780 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.YtGfagDfoA 00:16:03.780 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YtGfagDfoA 00:16:03.780 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:04.038 [2024-11-20 08:49:34.877810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.038 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:04.296 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:04.555 [2024-11-20 08:49:35.461958] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:04.555 [2024-11-20 08:49:35.462263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.814 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:04.814 malloc0 00:16:05.073 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:05.333 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:05.592 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YtGfagDfoA 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
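For reference, the key material and the target-side TLS setup traced above reduce to the following sequence (values are the ones used in this run; the key file path came from mktemp, and rpc.py talks to the target's default /var/tmp/spdk.sock):

  # write the interchange-format PSK to a file only the owner can read
  echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: > /tmp/tmp.YtGfagDfoA
  chmod 0600 /tmp/tmp.YtGfagDfoA
  # TLS-enabled NVMe/TCP target: -k on the listener, --psk key0 on the allowed host
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0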
00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YtGfagDfoA 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72222 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72222 /var/tmp/bdevperf.sock 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72222 ']' 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.851 08:49:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.851 [2024-11-20 08:49:36.618425] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
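Once /var/tmp/bdevperf.sock is up, the initiator side of this positive case is three operations; the trace that follows shows all of them succeeding and the TLSTESTn1 bdev carrying verify I/O for 10 seconds:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests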
00:16:05.851 [2024-11-20 08:49:36.618544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72222 ] 00:16:05.851 [2024-11-20 08:49:36.763001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.110 [2024-11-20 08:49:36.837307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.111 [2024-11-20 08:49:36.912212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:07.047 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.047 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:07.047 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:07.047 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:07.305 [2024-11-20 08:49:38.142198] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:07.305 TLSTESTn1 00:16:07.564 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:07.564 Running I/O for 10 seconds... 00:16:09.445 3968.00 IOPS, 15.50 MiB/s [2024-11-20T08:49:41.735Z] 4010.50 IOPS, 15.67 MiB/s [2024-11-20T08:49:42.669Z] 4031.00 IOPS, 15.75 MiB/s [2024-11-20T08:49:43.604Z] 4062.50 IOPS, 15.87 MiB/s [2024-11-20T08:49:44.540Z] 4054.20 IOPS, 15.84 MiB/s [2024-11-20T08:49:45.512Z] 4052.50 IOPS, 15.83 MiB/s [2024-11-20T08:49:46.444Z] 4058.71 IOPS, 15.85 MiB/s [2024-11-20T08:49:47.376Z] 4063.75 IOPS, 15.87 MiB/s [2024-11-20T08:49:48.749Z] 4071.56 IOPS, 15.90 MiB/s [2024-11-20T08:49:48.749Z] 4069.90 IOPS, 15.90 MiB/s 00:16:17.834 Latency(us) 00:16:17.834 [2024-11-20T08:49:48.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.834 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:17.834 Verification LBA range: start 0x0 length 0x2000 00:16:17.834 TLSTESTn1 : 10.02 4075.86 15.92 0.00 0.00 31342.61 5987.61 22520.55 00:16:17.834 [2024-11-20T08:49:48.749Z] =================================================================================================================== 00:16:17.834 [2024-11-20T08:49:48.749Z] Total : 4075.86 15.92 0.00 0.00 31342.61 5987.61 22520.55 00:16:17.834 { 00:16:17.834 "results": [ 00:16:17.834 { 00:16:17.834 "job": "TLSTESTn1", 00:16:17.834 "core_mask": "0x4", 00:16:17.834 "workload": "verify", 00:16:17.834 "status": "finished", 00:16:17.834 "verify_range": { 00:16:17.834 "start": 0, 00:16:17.834 "length": 8192 00:16:17.834 }, 00:16:17.834 "queue_depth": 128, 00:16:17.834 "io_size": 4096, 00:16:17.834 "runtime": 10.01653, 00:16:17.834 "iops": 4075.8625991236486, 00:16:17.834 "mibps": 15.921338277826752, 00:16:17.834 "io_failed": 0, 00:16:17.834 "io_timeout": 0, 00:16:17.834 "avg_latency_us": 31342.607760651634, 00:16:17.834 "min_latency_us": 5987.607272727273, 00:16:17.834 
"max_latency_us": 22520.552727272727 00:16:17.834 } 00:16:17.834 ], 00:16:17.834 "core_count": 1 00:16:17.834 } 00:16:17.834 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:17.834 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72222 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72222 ']' 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72222 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72222 00:16:17.835 killing process with pid 72222 00:16:17.835 Received shutdown signal, test time was about 10.000000 seconds 00:16:17.835 00:16:17.835 Latency(us) 00:16:17.835 [2024-11-20T08:49:48.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.835 [2024-11-20T08:49:48.750Z] =================================================================================================================== 00:16:17.835 [2024-11-20T08:49:48.750Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72222' 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72222 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72222 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.YtGfagDfoA 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YtGfagDfoA 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YtGfagDfoA 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YtGfagDfoA 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YtGfagDfoA 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72362 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72362 /var/tmp/bdevperf.sock 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72362 ']' 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.835 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.094 [2024-11-20 08:49:48.753312] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
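This pass repeats the attach after the key file was loosened to 0666. Judging from the errors that follow (and from the 0600 mode used in the passing cases), keyring_file_add_key rejects key files that grant group/other access, so the attach that depends on key0 fails as well. The commands being exercised:

  chmod 0666 /tmp/tmp.YtGfagDfoA
  # expected to fail: "Invalid permissions for key file '/tmp/tmp.YtGfagDfoA': 0100666"
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA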
00:16:18.094 [2024-11-20 08:49:48.753580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72362 ] 00:16:18.094 [2024-11-20 08:49:48.903786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.094 [2024-11-20 08:49:48.976284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.353 [2024-11-20 08:49:49.047808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:18.353 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.353 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:18.353 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:18.611 [2024-11-20 08:49:49.408966] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YtGfagDfoA': 0100666 00:16:18.611 [2024-11-20 08:49:49.409234] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:18.611 request: 00:16:18.611 { 00:16:18.611 "name": "key0", 00:16:18.611 "path": "/tmp/tmp.YtGfagDfoA", 00:16:18.611 "method": "keyring_file_add_key", 00:16:18.611 "req_id": 1 00:16:18.611 } 00:16:18.611 Got JSON-RPC error response 00:16:18.611 response: 00:16:18.611 { 00:16:18.611 "code": -1, 00:16:18.611 "message": "Operation not permitted" 00:16:18.611 } 00:16:18.611 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:18.934 [2024-11-20 08:49:49.657158] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:18.934 [2024-11-20 08:49:49.657260] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:18.934 request: 00:16:18.934 { 00:16:18.934 "name": "TLSTEST", 00:16:18.934 "trtype": "tcp", 00:16:18.934 "traddr": "10.0.0.3", 00:16:18.934 "adrfam": "ipv4", 00:16:18.934 "trsvcid": "4420", 00:16:18.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.934 "prchk_reftag": false, 00:16:18.934 "prchk_guard": false, 00:16:18.934 "hdgst": false, 00:16:18.934 "ddgst": false, 00:16:18.934 "psk": "key0", 00:16:18.934 "allow_unrecognized_csi": false, 00:16:18.934 "method": "bdev_nvme_attach_controller", 00:16:18.934 "req_id": 1 00:16:18.934 } 00:16:18.934 Got JSON-RPC error response 00:16:18.934 response: 00:16:18.934 { 00:16:18.934 "code": -126, 00:16:18.934 "message": "Required key not available" 00:16:18.934 } 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72362 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72362 ']' 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72362 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72362 00:16:18.934 killing process with pid 72362 00:16:18.934 Received shutdown signal, test time was about 10.000000 seconds 00:16:18.934 00:16:18.934 Latency(us) 00:16:18.934 [2024-11-20T08:49:49.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.934 [2024-11-20T08:49:49.849Z] =================================================================================================================== 00:16:18.934 [2024-11-20T08:49:49.849Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72362' 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72362 00:16:18.934 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72362 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72174 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72174 ']' 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72174 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72174 00:16:19.199 killing process with pid 72174 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72174' 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72174 00:16:19.199 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72174 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72395 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72395 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72395 ']' 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.458 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.458 [2024-11-20 08:49:50.332329] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:19.458 [2024-11-20 08:49:50.332437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.717 [2024-11-20 08:49:50.482443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.717 [2024-11-20 08:49:50.553867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.717 [2024-11-20 08:49:50.553944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.717 [2024-11-20 08:49:50.553957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.717 [2024-11-20 08:49:50.553966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.717 [2024-11-20 08:49:50.553974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
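The same permissive key file is now fed to the target-side setup, which is likewise expected to fail: keyring_file_add_key rejects the 0666 file, so the later nvmf_subsystem_add_host cannot resolve key0 and returns the -32603 "Internal error" seen below. The failing pair, for reference:

  # rejected while the file is still 0666
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA
  # fails: "Key 'key0' does not exist"
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0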
00:16:19.717 [2024-11-20 08:49:50.554441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.717 [2024-11-20 08:49:50.628783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:19.975 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.YtGfagDfoA 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.YtGfagDfoA 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.YtGfagDfoA 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YtGfagDfoA 00:16:19.976 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:20.235 [2024-11-20 08:49:51.023015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.235 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:20.494 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:20.752 [2024-11-20 08:49:51.615225] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:20.752 [2024-11-20 08:49:51.615559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:20.752 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:21.010 malloc0 00:16:21.269 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:21.527 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:21.785 
[2024-11-20 08:49:52.493552] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YtGfagDfoA': 0100666 00:16:21.785 [2024-11-20 08:49:52.493861] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:21.785 request: 00:16:21.785 { 00:16:21.785 "name": "key0", 00:16:21.785 "path": "/tmp/tmp.YtGfagDfoA", 00:16:21.785 "method": "keyring_file_add_key", 00:16:21.785 "req_id": 1 00:16:21.785 } 00:16:21.785 Got JSON-RPC error response 00:16:21.785 response: 00:16:21.785 { 00:16:21.785 "code": -1, 00:16:21.785 "message": "Operation not permitted" 00:16:21.785 } 00:16:21.785 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:22.044 [2024-11-20 08:49:52.753620] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:22.044 [2024-11-20 08:49:52.753968] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:22.044 request: 00:16:22.044 { 00:16:22.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.044 "host": "nqn.2016-06.io.spdk:host1", 00:16:22.044 "psk": "key0", 00:16:22.044 "method": "nvmf_subsystem_add_host", 00:16:22.044 "req_id": 1 00:16:22.044 } 00:16:22.044 Got JSON-RPC error response 00:16:22.044 response: 00:16:22.044 { 00:16:22.044 "code": -32603, 00:16:22.044 "message": "Internal error" 00:16:22.044 } 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72395 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72395 ']' 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72395 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72395 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:22.044 killing process with pid 72395 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72395' 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72395 00:16:22.044 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72395 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.YtGfagDfoA 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72457 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72457 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72457 ']' 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.302 08:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.302 [2024-11-20 08:49:53.165301] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:22.302 [2024-11-20 08:49:53.165400] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.560 [2024-11-20 08:49:53.320977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.560 [2024-11-20 08:49:53.401261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.560 [2024-11-20 08:49:53.401329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.560 [2024-11-20 08:49:53.401354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.560 [2024-11-20 08:49:53.401365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.560 [2024-11-20 08:49:53.401375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
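As in the earlier passes, the target for this round is restarted inside the test's network namespace and the harness waits for its RPC socket before configuring it; the key file has already been restored to 0600 above, so the full TLS setup is expected to succeed this time (path relative to the SPDK checkout, as before):

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # waitforlisten then polls /var/tmp/spdk.sock until the target answers RPCs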
00:16:22.560 [2024-11-20 08:49:53.401907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.819 [2024-11-20 08:49:53.474900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.YtGfagDfoA 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YtGfagDfoA 00:16:23.386 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:23.644 [2024-11-20 08:49:54.427295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.644 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:23.903 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:24.217 [2024-11-20 08:49:54.963407] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:24.217 [2024-11-20 08:49:54.963672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.217 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:24.490 malloc0 00:16:24.490 08:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.749 08:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:25.007 08:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72512 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72512 /var/tmp/bdevperf.sock 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72512 ']' 
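After the TLS attach succeeds again, both applications are asked to dump their live configuration; the JSON that follows (tgtconf, then bdevperfconf) is the output of these two calls, and it shows the keyring entry for /tmp/tmp.YtGfagDfoA, the uring socket default and the secure_channel listener carried over from the setup above:

  scripts/rpc.py save_config
  scripts/rpc.py -s /var/tmp/bdevperf.sock save_config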
00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.266 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.266 [2024-11-20 08:49:56.106235] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:25.266 [2024-11-20 08:49:56.106320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72512 ] 00:16:25.523 [2024-11-20 08:49:56.251120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.523 [2024-11-20 08:49:56.314111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.523 [2024-11-20 08:49:56.385368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.780 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.780 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:25.780 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:26.038 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:26.297 [2024-11-20 08:49:57.010402] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.297 TLSTESTn1 00:16:26.297 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:26.556 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:26.556 "subsystems": [ 00:16:26.556 { 00:16:26.556 "subsystem": "keyring", 00:16:26.556 "config": [ 00:16:26.556 { 00:16:26.556 "method": "keyring_file_add_key", 00:16:26.556 "params": { 00:16:26.556 "name": "key0", 00:16:26.556 "path": "/tmp/tmp.YtGfagDfoA" 00:16:26.556 } 00:16:26.556 } 00:16:26.556 ] 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "subsystem": "iobuf", 00:16:26.556 "config": [ 00:16:26.556 { 00:16:26.556 "method": "iobuf_set_options", 00:16:26.556 "params": { 00:16:26.556 "small_pool_count": 8192, 00:16:26.556 "large_pool_count": 1024, 00:16:26.556 "small_bufsize": 8192, 00:16:26.556 "large_bufsize": 135168, 00:16:26.556 "enable_numa": false 00:16:26.556 } 00:16:26.556 } 00:16:26.556 ] 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "subsystem": "sock", 00:16:26.556 "config": [ 00:16:26.556 { 00:16:26.556 "method": "sock_set_default_impl", 00:16:26.556 "params": { 
00:16:26.556 "impl_name": "uring" 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "sock_impl_set_options", 00:16:26.556 "params": { 00:16:26.556 "impl_name": "ssl", 00:16:26.556 "recv_buf_size": 4096, 00:16:26.556 "send_buf_size": 4096, 00:16:26.556 "enable_recv_pipe": true, 00:16:26.556 "enable_quickack": false, 00:16:26.556 "enable_placement_id": 0, 00:16:26.556 "enable_zerocopy_send_server": true, 00:16:26.556 "enable_zerocopy_send_client": false, 00:16:26.556 "zerocopy_threshold": 0, 00:16:26.556 "tls_version": 0, 00:16:26.556 "enable_ktls": false 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "sock_impl_set_options", 00:16:26.556 "params": { 00:16:26.556 "impl_name": "posix", 00:16:26.556 "recv_buf_size": 2097152, 00:16:26.556 "send_buf_size": 2097152, 00:16:26.556 "enable_recv_pipe": true, 00:16:26.556 "enable_quickack": false, 00:16:26.556 "enable_placement_id": 0, 00:16:26.556 "enable_zerocopy_send_server": true, 00:16:26.556 "enable_zerocopy_send_client": false, 00:16:26.556 "zerocopy_threshold": 0, 00:16:26.556 "tls_version": 0, 00:16:26.556 "enable_ktls": false 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "sock_impl_set_options", 00:16:26.556 "params": { 00:16:26.556 "impl_name": "uring", 00:16:26.556 "recv_buf_size": 2097152, 00:16:26.556 "send_buf_size": 2097152, 00:16:26.556 "enable_recv_pipe": true, 00:16:26.556 "enable_quickack": false, 00:16:26.556 "enable_placement_id": 0, 00:16:26.556 "enable_zerocopy_send_server": false, 00:16:26.556 "enable_zerocopy_send_client": false, 00:16:26.556 "zerocopy_threshold": 0, 00:16:26.556 "tls_version": 0, 00:16:26.556 "enable_ktls": false 00:16:26.556 } 00:16:26.556 } 00:16:26.556 ] 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "subsystem": "vmd", 00:16:26.556 "config": [] 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "subsystem": "accel", 00:16:26.556 "config": [ 00:16:26.556 { 00:16:26.556 "method": "accel_set_options", 00:16:26.556 "params": { 00:16:26.556 "small_cache_size": 128, 00:16:26.556 "large_cache_size": 16, 00:16:26.556 "task_count": 2048, 00:16:26.556 "sequence_count": 2048, 00:16:26.556 "buf_count": 2048 00:16:26.556 } 00:16:26.556 } 00:16:26.556 ] 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "subsystem": "bdev", 00:16:26.556 "config": [ 00:16:26.556 { 00:16:26.556 "method": "bdev_set_options", 00:16:26.556 "params": { 00:16:26.556 "bdev_io_pool_size": 65535, 00:16:26.556 "bdev_io_cache_size": 256, 00:16:26.556 "bdev_auto_examine": true, 00:16:26.556 "iobuf_small_cache_size": 128, 00:16:26.556 "iobuf_large_cache_size": 16 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "bdev_raid_set_options", 00:16:26.556 "params": { 00:16:26.556 "process_window_size_kb": 1024, 00:16:26.556 "process_max_bandwidth_mb_sec": 0 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "bdev_iscsi_set_options", 00:16:26.556 "params": { 00:16:26.556 "timeout_sec": 30 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "bdev_nvme_set_options", 00:16:26.556 "params": { 00:16:26.556 "action_on_timeout": "none", 00:16:26.556 "timeout_us": 0, 00:16:26.556 "timeout_admin_us": 0, 00:16:26.556 "keep_alive_timeout_ms": 10000, 00:16:26.556 "arbitration_burst": 0, 00:16:26.556 "low_priority_weight": 0, 00:16:26.556 "medium_priority_weight": 0, 00:16:26.556 "high_priority_weight": 0, 00:16:26.556 "nvme_adminq_poll_period_us": 10000, 00:16:26.556 "nvme_ioq_poll_period_us": 0, 00:16:26.556 "io_queue_requests": 0, 00:16:26.556 "delay_cmd_submit": 
true, 00:16:26.556 "transport_retry_count": 4, 00:16:26.556 "bdev_retry_count": 3, 00:16:26.556 "transport_ack_timeout": 0, 00:16:26.556 "ctrlr_loss_timeout_sec": 0, 00:16:26.556 "reconnect_delay_sec": 0, 00:16:26.556 "fast_io_fail_timeout_sec": 0, 00:16:26.556 "disable_auto_failback": false, 00:16:26.556 "generate_uuids": false, 00:16:26.556 "transport_tos": 0, 00:16:26.556 "nvme_error_stat": false, 00:16:26.556 "rdma_srq_size": 0, 00:16:26.556 "io_path_stat": false, 00:16:26.556 "allow_accel_sequence": false, 00:16:26.556 "rdma_max_cq_size": 0, 00:16:26.556 "rdma_cm_event_timeout_ms": 0, 00:16:26.556 "dhchap_digests": [ 00:16:26.556 "sha256", 00:16:26.556 "sha384", 00:16:26.556 "sha512" 00:16:26.556 ], 00:16:26.556 "dhchap_dhgroups": [ 00:16:26.556 "null", 00:16:26.556 "ffdhe2048", 00:16:26.556 "ffdhe3072", 00:16:26.556 "ffdhe4096", 00:16:26.556 "ffdhe6144", 00:16:26.556 "ffdhe8192" 00:16:26.556 ] 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "bdev_nvme_set_hotplug", 00:16:26.556 "params": { 00:16:26.556 "period_us": 100000, 00:16:26.556 "enable": false 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "bdev_malloc_create", 00:16:26.556 "params": { 00:16:26.556 "name": "malloc0", 00:16:26.556 "num_blocks": 8192, 00:16:26.556 "block_size": 4096, 00:16:26.556 "physical_block_size": 4096, 00:16:26.556 "uuid": "f987a9aa-e85c-4fb5-a08a-754ebe9f9181", 00:16:26.556 "optimal_io_boundary": 0, 00:16:26.556 "md_size": 0, 00:16:26.556 "dif_type": 0, 00:16:26.556 "dif_is_head_of_md": false, 00:16:26.556 "dif_pi_format": 0 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "bdev_wait_for_examine" 00:16:26.556 } 00:16:26.556 ] 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "subsystem": "nbd", 00:16:26.556 "config": [] 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "subsystem": "scheduler", 00:16:26.556 "config": [ 00:16:26.556 { 00:16:26.556 "method": "framework_set_scheduler", 00:16:26.556 "params": { 00:16:26.556 "name": "static" 00:16:26.556 } 00:16:26.556 } 00:16:26.556 ] 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "subsystem": "nvmf", 00:16:26.556 "config": [ 00:16:26.556 { 00:16:26.556 "method": "nvmf_set_config", 00:16:26.556 "params": { 00:16:26.556 "discovery_filter": "match_any", 00:16:26.556 "admin_cmd_passthru": { 00:16:26.556 "identify_ctrlr": false 00:16:26.556 }, 00:16:26.556 "dhchap_digests": [ 00:16:26.556 "sha256", 00:16:26.556 "sha384", 00:16:26.556 "sha512" 00:16:26.556 ], 00:16:26.556 "dhchap_dhgroups": [ 00:16:26.556 "null", 00:16:26.556 "ffdhe2048", 00:16:26.556 "ffdhe3072", 00:16:26.556 "ffdhe4096", 00:16:26.556 "ffdhe6144", 00:16:26.556 "ffdhe8192" 00:16:26.556 ] 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "nvmf_set_max_subsystems", 00:16:26.556 "params": { 00:16:26.556 "max_subsystems": 1024 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "nvmf_set_crdt", 00:16:26.556 "params": { 00:16:26.556 "crdt1": 0, 00:16:26.556 "crdt2": 0, 00:16:26.556 "crdt3": 0 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "nvmf_create_transport", 00:16:26.556 "params": { 00:16:26.556 "trtype": "TCP", 00:16:26.556 "max_queue_depth": 128, 00:16:26.556 "max_io_qpairs_per_ctrlr": 127, 00:16:26.556 "in_capsule_data_size": 4096, 00:16:26.556 "max_io_size": 131072, 00:16:26.556 "io_unit_size": 131072, 00:16:26.556 "max_aq_depth": 128, 00:16:26.556 "num_shared_buffers": 511, 00:16:26.556 "buf_cache_size": 4294967295, 00:16:26.556 "dif_insert_or_strip": false, 00:16:26.556 "zcopy": false, 
00:16:26.556 "c2h_success": false, 00:16:26.556 "sock_priority": 0, 00:16:26.556 "abort_timeout_sec": 1, 00:16:26.556 "ack_timeout": 0, 00:16:26.556 "data_wr_pool_size": 0 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "nvmf_create_subsystem", 00:16:26.556 "params": { 00:16:26.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.556 "allow_any_host": false, 00:16:26.556 "serial_number": "SPDK00000000000001", 00:16:26.556 "model_number": "SPDK bdev Controller", 00:16:26.556 "max_namespaces": 10, 00:16:26.556 "min_cntlid": 1, 00:16:26.556 "max_cntlid": 65519, 00:16:26.556 "ana_reporting": false 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "nvmf_subsystem_add_host", 00:16:26.556 "params": { 00:16:26.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.556 "host": "nqn.2016-06.io.spdk:host1", 00:16:26.556 "psk": "key0" 00:16:26.556 } 00:16:26.556 }, 00:16:26.556 { 00:16:26.556 "method": "nvmf_subsystem_add_ns", 00:16:26.556 "params": { 00:16:26.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.556 "namespace": { 00:16:26.556 "nsid": 1, 00:16:26.556 "bdev_name": "malloc0", 00:16:26.556 "nguid": "F987A9AAE85C4FB5A08A754EBE9F9181", 00:16:26.557 "uuid": "f987a9aa-e85c-4fb5-a08a-754ebe9f9181", 00:16:26.557 "no_auto_visible": false 00:16:26.557 } 00:16:26.557 } 00:16:26.557 }, 00:16:26.557 { 00:16:26.557 "method": "nvmf_subsystem_add_listener", 00:16:26.557 "params": { 00:16:26.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.557 "listen_address": { 00:16:26.557 "trtype": "TCP", 00:16:26.557 "adrfam": "IPv4", 00:16:26.557 "traddr": "10.0.0.3", 00:16:26.557 "trsvcid": "4420" 00:16:26.557 }, 00:16:26.557 "secure_channel": true 00:16:26.557 } 00:16:26.557 } 00:16:26.557 ] 00:16:26.557 } 00:16:26.557 ] 00:16:26.557 }' 00:16:26.557 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:27.124 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:27.124 "subsystems": [ 00:16:27.124 { 00:16:27.124 "subsystem": "keyring", 00:16:27.124 "config": [ 00:16:27.124 { 00:16:27.124 "method": "keyring_file_add_key", 00:16:27.124 "params": { 00:16:27.124 "name": "key0", 00:16:27.124 "path": "/tmp/tmp.YtGfagDfoA" 00:16:27.124 } 00:16:27.124 } 00:16:27.124 ] 00:16:27.124 }, 00:16:27.124 { 00:16:27.124 "subsystem": "iobuf", 00:16:27.124 "config": [ 00:16:27.124 { 00:16:27.124 "method": "iobuf_set_options", 00:16:27.124 "params": { 00:16:27.124 "small_pool_count": 8192, 00:16:27.124 "large_pool_count": 1024, 00:16:27.124 "small_bufsize": 8192, 00:16:27.124 "large_bufsize": 135168, 00:16:27.124 "enable_numa": false 00:16:27.124 } 00:16:27.124 } 00:16:27.124 ] 00:16:27.124 }, 00:16:27.124 { 00:16:27.124 "subsystem": "sock", 00:16:27.124 "config": [ 00:16:27.124 { 00:16:27.124 "method": "sock_set_default_impl", 00:16:27.124 "params": { 00:16:27.124 "impl_name": "uring" 00:16:27.124 } 00:16:27.124 }, 00:16:27.124 { 00:16:27.124 "method": "sock_impl_set_options", 00:16:27.124 "params": { 00:16:27.124 "impl_name": "ssl", 00:16:27.124 "recv_buf_size": 4096, 00:16:27.124 "send_buf_size": 4096, 00:16:27.124 "enable_recv_pipe": true, 00:16:27.124 "enable_quickack": false, 00:16:27.124 "enable_placement_id": 0, 00:16:27.124 "enable_zerocopy_send_server": true, 00:16:27.124 "enable_zerocopy_send_client": false, 00:16:27.125 "zerocopy_threshold": 0, 00:16:27.125 "tls_version": 0, 00:16:27.125 "enable_ktls": false 00:16:27.125 } 00:16:27.125 }, 
00:16:27.125 { 00:16:27.125 "method": "sock_impl_set_options", 00:16:27.125 "params": { 00:16:27.125 "impl_name": "posix", 00:16:27.125 "recv_buf_size": 2097152, 00:16:27.125 "send_buf_size": 2097152, 00:16:27.125 "enable_recv_pipe": true, 00:16:27.125 "enable_quickack": false, 00:16:27.125 "enable_placement_id": 0, 00:16:27.125 "enable_zerocopy_send_server": true, 00:16:27.125 "enable_zerocopy_send_client": false, 00:16:27.125 "zerocopy_threshold": 0, 00:16:27.125 "tls_version": 0, 00:16:27.125 "enable_ktls": false 00:16:27.125 } 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "method": "sock_impl_set_options", 00:16:27.125 "params": { 00:16:27.125 "impl_name": "uring", 00:16:27.125 "recv_buf_size": 2097152, 00:16:27.125 "send_buf_size": 2097152, 00:16:27.125 "enable_recv_pipe": true, 00:16:27.125 "enable_quickack": false, 00:16:27.125 "enable_placement_id": 0, 00:16:27.125 "enable_zerocopy_send_server": false, 00:16:27.125 "enable_zerocopy_send_client": false, 00:16:27.125 "zerocopy_threshold": 0, 00:16:27.125 "tls_version": 0, 00:16:27.125 "enable_ktls": false 00:16:27.125 } 00:16:27.125 } 00:16:27.125 ] 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "subsystem": "vmd", 00:16:27.125 "config": [] 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "subsystem": "accel", 00:16:27.125 "config": [ 00:16:27.125 { 00:16:27.125 "method": "accel_set_options", 00:16:27.125 "params": { 00:16:27.125 "small_cache_size": 128, 00:16:27.125 "large_cache_size": 16, 00:16:27.125 "task_count": 2048, 00:16:27.125 "sequence_count": 2048, 00:16:27.125 "buf_count": 2048 00:16:27.125 } 00:16:27.125 } 00:16:27.125 ] 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "subsystem": "bdev", 00:16:27.125 "config": [ 00:16:27.125 { 00:16:27.125 "method": "bdev_set_options", 00:16:27.125 "params": { 00:16:27.125 "bdev_io_pool_size": 65535, 00:16:27.125 "bdev_io_cache_size": 256, 00:16:27.125 "bdev_auto_examine": true, 00:16:27.125 "iobuf_small_cache_size": 128, 00:16:27.125 "iobuf_large_cache_size": 16 00:16:27.125 } 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "method": "bdev_raid_set_options", 00:16:27.125 "params": { 00:16:27.125 "process_window_size_kb": 1024, 00:16:27.125 "process_max_bandwidth_mb_sec": 0 00:16:27.125 } 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "method": "bdev_iscsi_set_options", 00:16:27.125 "params": { 00:16:27.125 "timeout_sec": 30 00:16:27.125 } 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "method": "bdev_nvme_set_options", 00:16:27.125 "params": { 00:16:27.125 "action_on_timeout": "none", 00:16:27.125 "timeout_us": 0, 00:16:27.125 "timeout_admin_us": 0, 00:16:27.125 "keep_alive_timeout_ms": 10000, 00:16:27.125 "arbitration_burst": 0, 00:16:27.125 "low_priority_weight": 0, 00:16:27.125 "medium_priority_weight": 0, 00:16:27.125 "high_priority_weight": 0, 00:16:27.125 "nvme_adminq_poll_period_us": 10000, 00:16:27.125 "nvme_ioq_poll_period_us": 0, 00:16:27.125 "io_queue_requests": 512, 00:16:27.125 "delay_cmd_submit": true, 00:16:27.125 "transport_retry_count": 4, 00:16:27.125 "bdev_retry_count": 3, 00:16:27.125 "transport_ack_timeout": 0, 00:16:27.125 "ctrlr_loss_timeout_sec": 0, 00:16:27.125 "reconnect_delay_sec": 0, 00:16:27.125 "fast_io_fail_timeout_sec": 0, 00:16:27.125 "disable_auto_failback": false, 00:16:27.125 "generate_uuids": false, 00:16:27.125 "transport_tos": 0, 00:16:27.125 "nvme_error_stat": false, 00:16:27.125 "rdma_srq_size": 0, 00:16:27.125 "io_path_stat": false, 00:16:27.125 "allow_accel_sequence": false, 00:16:27.125 "rdma_max_cq_size": 0, 00:16:27.125 "rdma_cm_event_timeout_ms": 0, 00:16:27.125 
"dhchap_digests": [ 00:16:27.125 "sha256", 00:16:27.125 "sha384", 00:16:27.125 "sha512" 00:16:27.125 ], 00:16:27.125 "dhchap_dhgroups": [ 00:16:27.125 "null", 00:16:27.125 "ffdhe2048", 00:16:27.125 "ffdhe3072", 00:16:27.125 "ffdhe4096", 00:16:27.125 "ffdhe6144", 00:16:27.125 "ffdhe8192" 00:16:27.125 ] 00:16:27.125 } 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "method": "bdev_nvme_attach_controller", 00:16:27.125 "params": { 00:16:27.125 "name": "TLSTEST", 00:16:27.125 "trtype": "TCP", 00:16:27.125 "adrfam": "IPv4", 00:16:27.125 "traddr": "10.0.0.3", 00:16:27.125 "trsvcid": "4420", 00:16:27.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.125 "prchk_reftag": false, 00:16:27.125 "prchk_guard": false, 00:16:27.125 "ctrlr_loss_timeout_sec": 0, 00:16:27.125 "reconnect_delay_sec": 0, 00:16:27.125 "fast_io_fail_timeout_sec": 0, 00:16:27.125 "psk": "key0", 00:16:27.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.125 "hdgst": false, 00:16:27.125 "ddgst": false, 00:16:27.125 "multipath": "multipath" 00:16:27.125 } 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "method": "bdev_nvme_set_hotplug", 00:16:27.125 "params": { 00:16:27.125 "period_us": 100000, 00:16:27.125 "enable": false 00:16:27.125 } 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "method": "bdev_wait_for_examine" 00:16:27.125 } 00:16:27.125 ] 00:16:27.125 }, 00:16:27.125 { 00:16:27.125 "subsystem": "nbd", 00:16:27.125 "config": [] 00:16:27.125 } 00:16:27.125 ] 00:16:27.125 }' 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72512 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72512 ']' 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72512 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72512 00:16:27.125 killing process with pid 72512 00:16:27.125 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.125 00:16:27.125 Latency(us) 00:16:27.125 [2024-11-20T08:49:58.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.125 [2024-11-20T08:49:58.040Z] =================================================================================================================== 00:16:27.125 [2024-11-20T08:49:58.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72512' 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72512 00:16:27.125 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72512 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72457 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72457 ']' 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72457 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72457 00:16:27.385 killing process with pid 72457 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72457' 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72457 00:16:27.385 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72457 00:16:27.644 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:27.644 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:27.644 "subsystems": [ 00:16:27.644 { 00:16:27.644 "subsystem": "keyring", 00:16:27.644 "config": [ 00:16:27.644 { 00:16:27.644 "method": "keyring_file_add_key", 00:16:27.644 "params": { 00:16:27.644 "name": "key0", 00:16:27.644 "path": "/tmp/tmp.YtGfagDfoA" 00:16:27.644 } 00:16:27.644 } 00:16:27.644 ] 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "subsystem": "iobuf", 00:16:27.644 "config": [ 00:16:27.644 { 00:16:27.644 "method": "iobuf_set_options", 00:16:27.644 "params": { 00:16:27.644 "small_pool_count": 8192, 00:16:27.644 "large_pool_count": 1024, 00:16:27.644 "small_bufsize": 8192, 00:16:27.644 "large_bufsize": 135168, 00:16:27.644 "enable_numa": false 00:16:27.644 } 00:16:27.644 } 00:16:27.644 ] 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "subsystem": "sock", 00:16:27.644 "config": [ 00:16:27.644 { 00:16:27.644 "method": "sock_set_default_impl", 00:16:27.644 "params": { 00:16:27.644 "impl_name": "uring" 00:16:27.644 } 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "method": "sock_impl_set_options", 00:16:27.644 "params": { 00:16:27.644 "impl_name": "ssl", 00:16:27.644 "recv_buf_size": 4096, 00:16:27.644 "send_buf_size": 4096, 00:16:27.644 "enable_recv_pipe": true, 00:16:27.644 "enable_quickack": false, 00:16:27.644 "enable_placement_id": 0, 00:16:27.644 "enable_zerocopy_send_server": true, 00:16:27.644 "enable_zerocopy_send_client": false, 00:16:27.644 "zerocopy_threshold": 0, 00:16:27.644 "tls_version": 0, 00:16:27.644 "enable_ktls": false 00:16:27.644 } 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "method": "sock_impl_set_options", 00:16:27.644 "params": { 00:16:27.644 "impl_name": "posix", 00:16:27.644 "recv_buf_size": 2097152, 00:16:27.644 "send_buf_size": 2097152, 00:16:27.644 "enable_recv_pipe": true, 00:16:27.644 "enable_quickack": false, 00:16:27.644 "enable_placement_id": 0, 00:16:27.644 "enable_zerocopy_send_server": true, 00:16:27.644 "enable_zerocopy_send_client": false, 00:16:27.644 "zerocopy_threshold": 0, 00:16:27.644 "tls_version": 0, 00:16:27.644 "enable_ktls": false 00:16:27.644 } 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "method": "sock_impl_set_options", 00:16:27.644 "params": { 00:16:27.644 "impl_name": "uring", 00:16:27.644 "recv_buf_size": 2097152, 00:16:27.644 "send_buf_size": 2097152, 00:16:27.644 "enable_recv_pipe": true, 00:16:27.644 "enable_quickack": false, 00:16:27.644 
"enable_placement_id": 0, 00:16:27.644 "enable_zerocopy_send_server": false, 00:16:27.644 "enable_zerocopy_send_client": false, 00:16:27.644 "zerocopy_threshold": 0, 00:16:27.644 "tls_version": 0, 00:16:27.644 "enable_ktls": false 00:16:27.644 } 00:16:27.644 } 00:16:27.644 ] 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "subsystem": "vmd", 00:16:27.644 "config": [] 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "subsystem": "accel", 00:16:27.644 "config": [ 00:16:27.644 { 00:16:27.644 "method": "accel_set_options", 00:16:27.644 "params": { 00:16:27.644 "small_cache_size": 128, 00:16:27.644 "large_cache_size": 16, 00:16:27.644 "task_count": 2048, 00:16:27.644 "sequence_count": 2048, 00:16:27.644 "buf_count": 2048 00:16:27.644 } 00:16:27.644 } 00:16:27.644 ] 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "subsystem": "bdev", 00:16:27.644 "config": [ 00:16:27.644 { 00:16:27.644 "method": "bdev_set_options", 00:16:27.644 "params": { 00:16:27.644 "bdev_io_pool_size": 65535, 00:16:27.644 "bdev_io_cache_size": 256, 00:16:27.644 "bdev_auto_examine": true, 00:16:27.644 "iobuf_small_cache_size": 128, 00:16:27.644 "iobuf_large_cache_size": 16 00:16:27.644 } 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "method": "bdev_raid_set_options", 00:16:27.644 "params": { 00:16:27.644 "process_window_size_kb": 1024, 00:16:27.644 "process_max_bandwidth_mb_sec": 0 00:16:27.644 } 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "method": "bdev_iscsi_set_options", 00:16:27.644 "params": { 00:16:27.644 "timeout_sec": 30 00:16:27.644 } 00:16:27.644 }, 00:16:27.644 { 00:16:27.644 "method": "bdev_nvme_set_options", 00:16:27.644 "params": { 00:16:27.644 "action_on_timeout": "none", 00:16:27.644 "timeout_us": 0, 00:16:27.644 "timeout_admin_us": 0, 00:16:27.644 "keep_alive_timeout_ms": 10000, 00:16:27.644 "arbitration_burst": 0, 00:16:27.644 "low_priority_weight": 0, 00:16:27.644 "medium_priority_weight": 0, 00:16:27.644 "high_priority_weight": 0, 00:16:27.644 "nvme_adminq_poll_period_us": 10000, 00:16:27.644 "nvme_ioq_poll_period_us": 0, 00:16:27.644 "io_queue_requests": 0, 00:16:27.644 "delay_cmd_submit": true, 00:16:27.644 "transport_retry_count": 4, 00:16:27.644 "bdev_retry_count": 3, 00:16:27.644 "transport_ack_timeout": 0, 00:16:27.644 "ctrlr_loss_timeout_sec": 0, 00:16:27.644 "reconnect_delay_sec": 0, 00:16:27.644 "fast_io_fail_timeout_sec": 0, 00:16:27.644 "disable_auto_failback": false, 00:16:27.644 "generate_uuids": false, 00:16:27.644 "transport_tos": 0, 00:16:27.645 "nvme_error_stat": false, 00:16:27.645 "rdma_srq_size": 0, 00:16:27.645 "io_path_stat": false, 00:16:27.645 "allow_accel_sequence": false, 00:16:27.645 "rdma_max_cq_size": 0, 00:16:27.645 "rdma_cm_event_timeout_ms": 0, 00:16:27.645 "dhchap_digests": [ 00:16:27.645 "sha256", 00:16:27.645 "sha384", 00:16:27.645 "sha512" 00:16:27.645 ], 00:16:27.645 "dhchap_dhgroups": [ 00:16:27.645 "null", 00:16:27.645 "ffdhe2048", 00:16:27.645 "ffdhe3072", 00:16:27.645 "ffdhe4096", 00:16:27.645 "ffdhe6144", 00:16:27.645 "ffdhe8192" 00:16:27.645 ] 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "bdev_nvme_set_hotplug", 00:16:27.645 "params": { 00:16:27.645 "period_us": 100000, 00:16:27.645 "enable": false 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "bdev_malloc_create", 00:16:27.645 "params": { 00:16:27.645 "name": "malloc0", 00:16:27.645 "num_blocks": 8192, 00:16:27.645 "block_size": 4096, 00:16:27.645 "physical_block_size": 4096, 00:16:27.645 "uuid": "f987a9aa-e85c-4fb5-a08a-754ebe9f9181", 00:16:27.645 "optimal_io_boundary": 0, 
00:16:27.645 "md_size": 0, 00:16:27.645 "dif_type": 0, 00:16:27.645 "dif_is_head_of_md": false, 00:16:27.645 "dif_pi_format": 0 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "bdev_wait_for_examine" 00:16:27.645 } 00:16:27.645 ] 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "subsystem": "nbd", 00:16:27.645 "config": [] 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "subsystem": "scheduler", 00:16:27.645 "config": [ 00:16:27.645 { 00:16:27.645 "method": "framework_set_scheduler", 00:16:27.645 "params": { 00:16:27.645 "name": "static" 00:16:27.645 } 00:16:27.645 } 00:16:27.645 ] 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "subsystem": "nvmf", 00:16:27.645 "config": [ 00:16:27.645 { 00:16:27.645 "method": "nvmf_set_config", 00:16:27.645 "params": { 00:16:27.645 "discovery_filter": "match_any", 00:16:27.645 "admin_cmd_passthru": { 00:16:27.645 "identify_ctrlr": false 00:16:27.645 }, 00:16:27.645 "dhchap_digests": [ 00:16:27.645 "sha256", 00:16:27.645 "sha384", 00:16:27.645 "sha512" 00:16:27.645 ], 00:16:27.645 "dhchap_dhgroups": [ 00:16:27.645 "null", 00:16:27.645 "ffdhe2048", 00:16:27.645 "ffdhe3072", 00:16:27.645 "ffdhe4096", 00:16:27.645 "ffdhe6144", 00:16:27.645 "ffdhe8192" 00:16:27.645 ] 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "nvmf_set_max_subsystems", 00:16:27.645 "params": { 00:16:27.645 "max_subsystems": 1024 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "nvmf_set_crdt", 00:16:27.645 "params": { 00:16:27.645 "crdt1": 0, 00:16:27.645 "crdt2": 0, 00:16:27.645 "crdt3": 0 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "nvmf_create_transport", 00:16:27.645 "params": { 00:16:27.645 "trtype": "TCP", 00:16:27.645 "max_queue_depth": 128, 00:16:27.645 "max_io_qpairs_per_ctrlr": 127, 00:16:27.645 "in_capsule_data_size": 4096, 00:16:27.645 "max_io_size": 131072, 00:16:27.645 "io_unit_size": 131072, 00:16:27.645 "max_aq_depth": 128, 00:16:27.645 "num_shared_buffers": 511, 00:16:27.645 "buf_cache_size": 4294967295, 00:16:27.645 "dif_insert_or_strip": false, 00:16:27.645 "zcopy": false, 00:16:27.645 "c2h_success": false, 00:16:27.645 "sock_priority": 0, 00:16:27.645 "abort_timeout_sec": 1, 00:16:27.645 "ack_timeout": 0, 00:16:27.645 "data_wr_pool_size": 0 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "nvmf_create_subsystem", 00:16:27.645 "params": { 00:16:27.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.645 "allow_any_host": false, 00:16:27.645 "serial_number": "SPDK00000000000001", 00:16:27.645 "model_number": "SPDK bdev Controller", 00:16:27.645 "max_namespaces": 10, 00:16:27.645 "min_cntlid": 1, 00:16:27.645 "max_cntlid": 65519, 00:16:27.645 "ana_reporting": false 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "nvmf_subsystem_add_host", 00:16:27.645 "params": { 00:16:27.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.645 "host": "nqn.2016-06.io.spdk:host1", 00:16:27.645 "psk": "key0" 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "nvmf_subsystem_add_ns", 00:16:27.645 "params": { 00:16:27.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.645 "namespace": { 00:16:27.645 "nsid": 1, 00:16:27.645 "bdev_name": "malloc0", 00:16:27.645 "nguid": "F987A9AAE85C4FB5A08A754EBE9F9181", 00:16:27.645 "uuid": "f987a9aa-e85c-4fb5-a08a-754ebe9f9181", 00:16:27.645 "no_auto_visible": false 00:16:27.645 } 00:16:27.645 } 00:16:27.645 }, 00:16:27.645 { 00:16:27.645 "method": "nvmf_subsystem_add_listener", 00:16:27.645 "params": { 00:16:27.645 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:16:27.645 "listen_address": { 00:16:27.645 "trtype": "TCP", 00:16:27.645 "adrfam": "IPv4", 00:16:27.645 "traddr": "10.0.0.3", 00:16:27.645 "trsvcid": "4420" 00:16:27.645 }, 00:16:27.645 "secure_channel": true 00:16:27.645 } 00:16:27.645 } 00:16:27.645 ] 00:16:27.645 } 00:16:27.645 ] 00:16:27.645 }' 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72560 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:27.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72560 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72560 ']' 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.645 08:49:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.645 [2024-11-20 08:49:58.427147] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:27.646 [2024-11-20 08:49:58.427423] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.904 [2024-11-20 08:49:58.571495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.904 [2024-11-20 08:49:58.641983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.904 [2024-11-20 08:49:58.642297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.904 [2024-11-20 08:49:58.642336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.904 [2024-11-20 08:49:58.642346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.904 [2024-11-20 08:49:58.642353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.904 [2024-11-20 08:49:58.642957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.163 [2024-11-20 08:49:58.828101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.163 [2024-11-20 08:49:58.920344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.163 [2024-11-20 08:49:58.952279] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:28.163 [2024-11-20 08:49:58.952720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72592 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72592 /var/tmp/bdevperf.sock 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72592 ']' 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
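A condensed sketch of the bdevperf launch pattern traced above: the tool is started idle with -z against its own RPC socket, handed its keyring/bdev JSON config on /dev/fd/63 (the same process-substitution trick used for the target's -c /dev/fd/62 earlier), and later kicked with perform_tests, as seen further down in this log. The BDEVPERF_CONF variable, the <(echo ...) form, and the backgrounding are illustrative assumptions; every flag is copied from the trace.

    # Start bdevperf idle (-z) on a private RPC socket, feeding the JSON config inline.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$BDEVPERF_CONF") &
    # Once the socket is up, trigger the actual I/O run over the same socket.
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests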
00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.732 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:28.732 "subsystems": [ 00:16:28.732 { 00:16:28.732 "subsystem": "keyring", 00:16:28.732 "config": [ 00:16:28.732 { 00:16:28.732 "method": "keyring_file_add_key", 00:16:28.732 "params": { 00:16:28.732 "name": "key0", 00:16:28.732 "path": "/tmp/tmp.YtGfagDfoA" 00:16:28.732 } 00:16:28.732 } 00:16:28.732 ] 00:16:28.732 }, 00:16:28.732 { 00:16:28.732 "subsystem": "iobuf", 00:16:28.732 "config": [ 00:16:28.732 { 00:16:28.732 "method": "iobuf_set_options", 00:16:28.732 "params": { 00:16:28.732 "small_pool_count": 8192, 00:16:28.732 "large_pool_count": 1024, 00:16:28.732 "small_bufsize": 8192, 00:16:28.732 "large_bufsize": 135168, 00:16:28.732 "enable_numa": false 00:16:28.732 } 00:16:28.732 } 00:16:28.732 ] 00:16:28.732 }, 00:16:28.732 { 00:16:28.733 "subsystem": "sock", 00:16:28.733 "config": [ 00:16:28.733 { 00:16:28.733 "method": "sock_set_default_impl", 00:16:28.733 "params": { 00:16:28.733 "impl_name": "uring" 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "sock_impl_set_options", 00:16:28.733 "params": { 00:16:28.733 "impl_name": "ssl", 00:16:28.733 "recv_buf_size": 4096, 00:16:28.733 "send_buf_size": 4096, 00:16:28.733 "enable_recv_pipe": true, 00:16:28.733 "enable_quickack": false, 00:16:28.733 "enable_placement_id": 0, 00:16:28.733 "enable_zerocopy_send_server": true, 00:16:28.733 "enable_zerocopy_send_client": false, 00:16:28.733 "zerocopy_threshold": 0, 00:16:28.733 "tls_version": 0, 00:16:28.733 "enable_ktls": false 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "sock_impl_set_options", 00:16:28.733 "params": { 00:16:28.733 "impl_name": "posix", 00:16:28.733 "recv_buf_size": 2097152, 00:16:28.733 "send_buf_size": 2097152, 00:16:28.733 "enable_recv_pipe": true, 00:16:28.733 "enable_quickack": false, 00:16:28.733 "enable_placement_id": 0, 00:16:28.733 "enable_zerocopy_send_server": true, 00:16:28.733 "enable_zerocopy_send_client": false, 00:16:28.733 "zerocopy_threshold": 0, 00:16:28.733 "tls_version": 0, 00:16:28.733 "enable_ktls": false 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "sock_impl_set_options", 00:16:28.733 "params": { 00:16:28.733 "impl_name": "uring", 00:16:28.733 "recv_buf_size": 2097152, 00:16:28.733 "send_buf_size": 2097152, 00:16:28.733 "enable_recv_pipe": true, 00:16:28.733 "enable_quickack": false, 00:16:28.733 "enable_placement_id": 0, 00:16:28.733 "enable_zerocopy_send_server": false, 00:16:28.733 "enable_zerocopy_send_client": false, 00:16:28.733 "zerocopy_threshold": 0, 00:16:28.733 "tls_version": 0, 00:16:28.733 "enable_ktls": false 00:16:28.733 } 00:16:28.733 } 00:16:28.733 ] 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "subsystem": "vmd", 00:16:28.733 "config": [] 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "subsystem": "accel", 00:16:28.733 "config": [ 00:16:28.733 { 00:16:28.733 "method": "accel_set_options", 00:16:28.733 "params": { 00:16:28.733 "small_cache_size": 128, 00:16:28.733 "large_cache_size": 16, 00:16:28.733 "task_count": 2048, 00:16:28.733 "sequence_count": 2048, 00:16:28.733 "buf_count": 2048 00:16:28.733 } 00:16:28.733 } 00:16:28.733 ] 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "subsystem": "bdev", 00:16:28.733 "config": [ 00:16:28.733 { 00:16:28.733 "method": 
"bdev_set_options", 00:16:28.733 "params": { 00:16:28.733 "bdev_io_pool_size": 65535, 00:16:28.733 "bdev_io_cache_size": 256, 00:16:28.733 "bdev_auto_examine": true, 00:16:28.733 "iobuf_small_cache_size": 128, 00:16:28.733 "iobuf_large_cache_size": 16 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "bdev_raid_set_options", 00:16:28.733 "params": { 00:16:28.733 "process_window_size_kb": 1024, 00:16:28.733 "process_max_bandwidth_mb_sec": 0 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "bdev_iscsi_set_options", 00:16:28.733 "params": { 00:16:28.733 "timeout_sec": 30 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "bdev_nvme_set_options", 00:16:28.733 "params": { 00:16:28.733 "action_on_timeout": "none", 00:16:28.733 "timeout_us": 0, 00:16:28.733 "timeout_admin_us": 0, 00:16:28.733 "keep_alive_timeout_ms": 10000, 00:16:28.733 "arbitration_burst": 0, 00:16:28.733 "low_priority_weight": 0, 00:16:28.733 "medium_priority_weight": 0, 00:16:28.733 "high_priority_weight": 0, 00:16:28.733 "nvme_adminq_poll_period_us": 10000, 00:16:28.733 "nvme_ioq_poll_period_us": 0, 00:16:28.733 "io_queue_requests": 512, 00:16:28.733 "delay_cmd_submit": true, 00:16:28.733 "transport_retry_count": 4, 00:16:28.733 "bdev_retry_count": 3, 00:16:28.733 "transport_ack_timeout": 0, 00:16:28.733 "ctrlr_loss_timeout_sec": 0, 00:16:28.733 "reconnect_delay_sec": 0, 00:16:28.733 "fast_io_fail_timeout_sec": 0, 00:16:28.733 "disable_auto_failback": false, 00:16:28.733 "generate_uuids": false, 00:16:28.733 "transport_tos": 0, 00:16:28.733 "nvme_error_stat": false, 00:16:28.733 "rdma_srq_size": 0, 00:16:28.733 "io_path_stat": false, 00:16:28.733 "allow_accel_sequence": false, 00:16:28.733 "rdma_max_cq_size": 0, 00:16:28.733 "rdma_cm_event_timeout_ms": 0, 00:16:28.733 "dhchap_digests": [ 00:16:28.733 "sha256", 00:16:28.733 "sha384", 00:16:28.733 "sha512" 00:16:28.733 ], 00:16:28.733 "dhchap_dhgroups": [ 00:16:28.733 "null", 00:16:28.733 "ffdhe2048", 00:16:28.733 "ffdhe3072", 00:16:28.733 "ffdhe4096", 00:16:28.733 "ffdhe6144", 00:16:28.733 "ffdhe8192" 00:16:28.733 ] 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "bdev_nvme_attach_controller", 00:16:28.733 "params": { 00:16:28.733 "name": "TLSTEST", 00:16:28.733 "trtype": "TCP", 00:16:28.733 "adrfam": "IPv4", 00:16:28.733 "traddr": "10.0.0.3", 00:16:28.733 "trsvcid": "4420", 00:16:28.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.733 "prchk_reftag": false, 00:16:28.733 "prchk_guard": false, 00:16:28.733 "ctrlr_loss_timeout_sec": 0, 00:16:28.733 "reconnect_delay_sec": 0, 00:16:28.733 "fast_io_fail_timeout_sec": 0, 00:16:28.733 "psk": "key0", 00:16:28.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.733 "hdgst": false, 00:16:28.733 "ddgst": false, 00:16:28.733 "multipath": "multipath" 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "bdev_nvme_set_hotplug", 00:16:28.733 "params": { 00:16:28.733 "period_us": 100000, 00:16:28.733 "enable": false 00:16:28.733 } 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "method": "bdev_wait_for_examine" 00:16:28.733 } 00:16:28.733 ] 00:16:28.733 }, 00:16:28.733 { 00:16:28.733 "subsystem": "nbd", 00:16:28.733 "config": [] 00:16:28.733 } 00:16:28.733 ] 00:16:28.733 }' 00:16:28.733 [2024-11-20 08:49:59.574769] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:16:28.733 [2024-11-20 08:49:59.575075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72592 ] 00:16:28.992 [2024-11-20 08:49:59.720448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.992 [2024-11-20 08:49:59.795290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.264 [2024-11-20 08:49:59.948501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.264 [2024-11-20 08:50:00.009218] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.830 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.830 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:29.830 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:30.088 Running I/O for 10 seconds... 00:16:31.959 4224.00 IOPS, 16.50 MiB/s [2024-11-20T08:50:03.811Z] 4232.50 IOPS, 16.53 MiB/s [2024-11-20T08:50:05.187Z] 4208.00 IOPS, 16.44 MiB/s [2024-11-20T08:50:06.122Z] 4200.25 IOPS, 16.41 MiB/s [2024-11-20T08:50:07.059Z] 4194.20 IOPS, 16.38 MiB/s [2024-11-20T08:50:07.994Z] 4191.83 IOPS, 16.37 MiB/s [2024-11-20T08:50:08.929Z] 4193.43 IOPS, 16.38 MiB/s [2024-11-20T08:50:09.867Z] 4193.12 IOPS, 16.38 MiB/s [2024-11-20T08:50:10.803Z] 4190.56 IOPS, 16.37 MiB/s [2024-11-20T08:50:10.803Z] 4191.90 IOPS, 16.37 MiB/s 00:16:39.888 Latency(us) 00:16:39.888 [2024-11-20T08:50:10.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.889 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:39.889 Verification LBA range: start 0x0 length 0x2000 00:16:39.889 TLSTESTn1 : 10.02 4197.92 16.40 0.00 0.00 30436.93 5093.93 23116.33 00:16:39.889 [2024-11-20T08:50:10.804Z] =================================================================================================================== 00:16:39.889 [2024-11-20T08:50:10.804Z] Total : 4197.92 16.40 0.00 0.00 30436.93 5093.93 23116.33 00:16:39.889 { 00:16:39.889 "results": [ 00:16:39.889 { 00:16:39.889 "job": "TLSTESTn1", 00:16:39.889 "core_mask": "0x4", 00:16:39.889 "workload": "verify", 00:16:39.889 "status": "finished", 00:16:39.889 "verify_range": { 00:16:39.889 "start": 0, 00:16:39.889 "length": 8192 00:16:39.889 }, 00:16:39.889 "queue_depth": 128, 00:16:39.889 "io_size": 4096, 00:16:39.889 "runtime": 10.015915, 00:16:39.889 "iops": 4197.919011892573, 00:16:39.889 "mibps": 16.39812114020536, 00:16:39.889 "io_failed": 0, 00:16:39.889 "io_timeout": 0, 00:16:39.889 "avg_latency_us": 30436.92853662439, 00:16:39.889 "min_latency_us": 5093.9345454545455, 00:16:39.889 "max_latency_us": 23116.334545454545 00:16:39.889 } 00:16:39.889 ], 00:16:39.889 "core_count": 1 00:16:39.889 } 00:16:40.148 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:40.148 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72592 00:16:40.148 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72592 ']' 00:16:40.148 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 72592 00:16:40.148 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:40.148 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.148 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72592 00:16:40.148 killing process with pid 72592 00:16:40.148 Received shutdown signal, test time was about 10.000000 seconds 00:16:40.149 00:16:40.149 Latency(us) 00:16:40.149 [2024-11-20T08:50:11.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.149 [2024-11-20T08:50:11.064Z] =================================================================================================================== 00:16:40.149 [2024-11-20T08:50:11.064Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.149 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:40.149 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:40.149 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72592' 00:16:40.149 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72592 00:16:40.149 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72592 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72560 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72560 ']' 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72560 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72560 00:16:40.407 killing process with pid 72560 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72560' 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72560 00:16:40.407 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72560 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
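The 10-second verify run above ends with a JSON block ("results" plus "core_count") summarising the job. A small sketch of pulling the headline numbers out of that block; saving it to a file and using jq are assumptions made for illustration, while the field names (job, iops, avg_latency_us) match the output above.

    # Extract IOPS and average latency from a saved bdevperf results block.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us"' bdevperf_results.json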
00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72726 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72726 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72726 ']' 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.665 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.665 [2024-11-20 08:50:11.478586] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:40.665 [2024-11-20 08:50:11.478871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.922 [2024-11-20 08:50:11.630549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.922 [2024-11-20 08:50:11.719175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.922 [2024-11-20 08:50:11.719512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.922 [2024-11-20 08:50:11.719767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.922 [2024-11-20 08:50:11.719929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.922 [2024-11-20 08:50:11.719945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:40.923 [2024-11-20 08:50:11.720444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.923 [2024-11-20 08:50:11.795890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.YtGfagDfoA 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YtGfagDfoA 00:16:41.857 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:41.857 [2024-11-20 08:50:12.762376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.116 08:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:42.375 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:42.633 [2024-11-20 08:50:13.310527] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:42.633 [2024-11-20 08:50:13.310844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:42.633 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:42.891 malloc0 00:16:42.891 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:43.150 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:43.415 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72787 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72787 /var/tmp/bdevperf.sock 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72787 ']' 00:16:43.695 
08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.695 08:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.695 [2024-11-20 08:50:14.516204] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:43.695 [2024-11-20 08:50:14.516293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72787 ] 00:16:43.954 [2024-11-20 08:50:14.668956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.954 [2024-11-20 08:50:14.751294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.954 [2024-11-20 08:50:14.827225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:44.888 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.888 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:44.888 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:45.147 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:45.147 [2024-11-20 08:50:16.038915] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.405 nvme0n1 00:16:45.405 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:45.405 Running I/O for 1 seconds... 
00:16:46.782 4108.00 IOPS, 16.05 MiB/s 00:16:46.782 Latency(us) 00:16:46.782 [2024-11-20T08:50:17.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.782 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:46.782 Verification LBA range: start 0x0 length 0x2000 00:16:46.782 nvme0n1 : 1.02 4169.87 16.29 0.00 0.00 30406.44 6136.55 23592.96 00:16:46.782 [2024-11-20T08:50:17.697Z] =================================================================================================================== 00:16:46.782 [2024-11-20T08:50:17.697Z] Total : 4169.87 16.29 0.00 0.00 30406.44 6136.55 23592.96 00:16:46.782 { 00:16:46.782 "results": [ 00:16:46.782 { 00:16:46.782 "job": "nvme0n1", 00:16:46.782 "core_mask": "0x2", 00:16:46.782 "workload": "verify", 00:16:46.782 "status": "finished", 00:16:46.782 "verify_range": { 00:16:46.782 "start": 0, 00:16:46.782 "length": 8192 00:16:46.782 }, 00:16:46.782 "queue_depth": 128, 00:16:46.782 "io_size": 4096, 00:16:46.782 "runtime": 1.0161, 00:16:46.782 "iops": 4169.865170750911, 00:16:46.782 "mibps": 16.288535823245745, 00:16:46.782 "io_failed": 0, 00:16:46.782 "io_timeout": 0, 00:16:46.782 "avg_latency_us": 30406.439410388997, 00:16:46.782 "min_latency_us": 6136.552727272728, 00:16:46.782 "max_latency_us": 23592.96 00:16:46.782 } 00:16:46.782 ], 00:16:46.782 "core_count": 1 00:16:46.782 } 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72787 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72787 ']' 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72787 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72787 00:16:46.782 killing process with pid 72787 00:16:46.782 Received shutdown signal, test time was about 1.000000 seconds 00:16:46.782 00:16:46.782 Latency(us) 00:16:46.782 [2024-11-20T08:50:17.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.782 [2024-11-20T08:50:17.697Z] =================================================================================================================== 00:16:46.782 [2024-11-20T08:50:17.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72787' 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72787 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72787 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72726 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72726 ']' 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72726 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72726 00:16:46.782 killing process with pid 72726 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72726' 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72726 00:16:46.782 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72726 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72845 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72845 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72845 ']' 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.348 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.348 [2024-11-20 08:50:18.034219] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:47.348 [2024-11-20 08:50:18.034349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.348 [2024-11-20 08:50:18.183898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.348 [2024-11-20 08:50:18.245687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.348 [2024-11-20 08:50:18.245759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:47.348 [2024-11-20 08:50:18.245787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.348 [2024-11-20 08:50:18.245796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.349 [2024-11-20 08:50:18.245803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.349 [2024-11-20 08:50:18.246361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.608 [2024-11-20 08:50:18.317897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.608 [2024-11-20 08:50:18.442236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.608 malloc0 00:16:47.608 [2024-11-20 08:50:18.476329] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.608 [2024-11-20 08:50:18.476637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72864 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72864 /var/tmp/bdevperf.sock 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72864 ']' 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
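The initiator side mirrors the target setup: the same PSK file is registered as key0 on the bdevperf RPC socket and the TLS-protected controller is attached with --psk, which is what the next trace lines do. A condensed sketch with the arguments copied from those rpc.py calls; the $rpc shorthand and the shortened script path are the only additions.

    rpc='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
    $rpc keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
         --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1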
00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.608 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.866 [2024-11-20 08:50:18.564854] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:47.866 [2024-11-20 08:50:18.565219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72864 ] 00:16:47.866 [2024-11-20 08:50:18.711710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.125 [2024-11-20 08:50:18.815428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.125 [2024-11-20 08:50:18.895446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:48.125 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.125 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:48.125 08:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YtGfagDfoA 00:16:48.386 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:48.643 [2024-11-20 08:50:19.486171] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.901 nvme0n1 00:16:48.901 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:48.901 Running I/O for 1 seconds... 
00:16:49.834 4096.00 IOPS, 16.00 MiB/s 00:16:49.834 Latency(us) 00:16:49.834 [2024-11-20T08:50:20.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.834 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:49.834 Verification LBA range: start 0x0 length 0x2000 00:16:49.834 nvme0n1 : 1.02 4128.00 16.13 0.00 0.00 30664.79 7179.17 19422.49 00:16:49.834 [2024-11-20T08:50:20.749Z] =================================================================================================================== 00:16:49.834 [2024-11-20T08:50:20.749Z] Total : 4128.00 16.13 0.00 0.00 30664.79 7179.17 19422.49 00:16:49.834 { 00:16:49.834 "results": [ 00:16:49.834 { 00:16:49.834 "job": "nvme0n1", 00:16:49.834 "core_mask": "0x2", 00:16:49.834 "workload": "verify", 00:16:49.834 "status": "finished", 00:16:49.834 "verify_range": { 00:16:49.834 "start": 0, 00:16:49.834 "length": 8192 00:16:49.834 }, 00:16:49.834 "queue_depth": 128, 00:16:49.834 "io_size": 4096, 00:16:49.834 "runtime": 1.023255, 00:16:49.834 "iops": 4128.0032836389755, 00:16:49.834 "mibps": 16.125012826714748, 00:16:49.834 "io_failed": 0, 00:16:49.834 "io_timeout": 0, 00:16:49.834 "avg_latency_us": 30664.79426997245, 00:16:49.834 "min_latency_us": 7179.170909090909, 00:16:49.834 "max_latency_us": 19422.487272727274 00:16:49.834 } 00:16:49.834 ], 00:16:49.834 "core_count": 1 00:16:49.834 } 00:16:50.092 08:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:50.092 08:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.092 08:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.092 08:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.092 08:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:16:50.092 "subsystems": [ 00:16:50.092 { 00:16:50.092 "subsystem": "keyring", 00:16:50.092 "config": [ 00:16:50.092 { 00:16:50.092 "method": "keyring_file_add_key", 00:16:50.092 "params": { 00:16:50.092 "name": "key0", 00:16:50.092 "path": "/tmp/tmp.YtGfagDfoA" 00:16:50.092 } 00:16:50.092 } 00:16:50.092 ] 00:16:50.092 }, 00:16:50.092 { 00:16:50.092 "subsystem": "iobuf", 00:16:50.092 "config": [ 00:16:50.092 { 00:16:50.092 "method": "iobuf_set_options", 00:16:50.092 "params": { 00:16:50.092 "small_pool_count": 8192, 00:16:50.092 "large_pool_count": 1024, 00:16:50.092 "small_bufsize": 8192, 00:16:50.092 "large_bufsize": 135168, 00:16:50.092 "enable_numa": false 00:16:50.092 } 00:16:50.092 } 00:16:50.092 ] 00:16:50.092 }, 00:16:50.092 { 00:16:50.092 "subsystem": "sock", 00:16:50.092 "config": [ 00:16:50.092 { 00:16:50.092 "method": "sock_set_default_impl", 00:16:50.092 "params": { 00:16:50.092 "impl_name": "uring" 00:16:50.092 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "sock_impl_set_options", 00:16:50.093 "params": { 00:16:50.093 "impl_name": "ssl", 00:16:50.093 "recv_buf_size": 4096, 00:16:50.093 "send_buf_size": 4096, 00:16:50.093 "enable_recv_pipe": true, 00:16:50.093 "enable_quickack": false, 00:16:50.093 "enable_placement_id": 0, 00:16:50.093 "enable_zerocopy_send_server": true, 00:16:50.093 "enable_zerocopy_send_client": false, 00:16:50.093 "zerocopy_threshold": 0, 00:16:50.093 "tls_version": 0, 00:16:50.093 "enable_ktls": false 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "sock_impl_set_options", 00:16:50.093 "params": { 00:16:50.093 "impl_name": 
"posix", 00:16:50.093 "recv_buf_size": 2097152, 00:16:50.093 "send_buf_size": 2097152, 00:16:50.093 "enable_recv_pipe": true, 00:16:50.093 "enable_quickack": false, 00:16:50.093 "enable_placement_id": 0, 00:16:50.093 "enable_zerocopy_send_server": true, 00:16:50.093 "enable_zerocopy_send_client": false, 00:16:50.093 "zerocopy_threshold": 0, 00:16:50.093 "tls_version": 0, 00:16:50.093 "enable_ktls": false 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "sock_impl_set_options", 00:16:50.093 "params": { 00:16:50.093 "impl_name": "uring", 00:16:50.093 "recv_buf_size": 2097152, 00:16:50.093 "send_buf_size": 2097152, 00:16:50.093 "enable_recv_pipe": true, 00:16:50.093 "enable_quickack": false, 00:16:50.093 "enable_placement_id": 0, 00:16:50.093 "enable_zerocopy_send_server": false, 00:16:50.093 "enable_zerocopy_send_client": false, 00:16:50.093 "zerocopy_threshold": 0, 00:16:50.093 "tls_version": 0, 00:16:50.093 "enable_ktls": false 00:16:50.093 } 00:16:50.093 } 00:16:50.093 ] 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "subsystem": "vmd", 00:16:50.093 "config": [] 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "subsystem": "accel", 00:16:50.093 "config": [ 00:16:50.093 { 00:16:50.093 "method": "accel_set_options", 00:16:50.093 "params": { 00:16:50.093 "small_cache_size": 128, 00:16:50.093 "large_cache_size": 16, 00:16:50.093 "task_count": 2048, 00:16:50.093 "sequence_count": 2048, 00:16:50.093 "buf_count": 2048 00:16:50.093 } 00:16:50.093 } 00:16:50.093 ] 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "subsystem": "bdev", 00:16:50.093 "config": [ 00:16:50.093 { 00:16:50.093 "method": "bdev_set_options", 00:16:50.093 "params": { 00:16:50.093 "bdev_io_pool_size": 65535, 00:16:50.093 "bdev_io_cache_size": 256, 00:16:50.093 "bdev_auto_examine": true, 00:16:50.093 "iobuf_small_cache_size": 128, 00:16:50.093 "iobuf_large_cache_size": 16 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "bdev_raid_set_options", 00:16:50.093 "params": { 00:16:50.093 "process_window_size_kb": 1024, 00:16:50.093 "process_max_bandwidth_mb_sec": 0 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "bdev_iscsi_set_options", 00:16:50.093 "params": { 00:16:50.093 "timeout_sec": 30 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "bdev_nvme_set_options", 00:16:50.093 "params": { 00:16:50.093 "action_on_timeout": "none", 00:16:50.093 "timeout_us": 0, 00:16:50.093 "timeout_admin_us": 0, 00:16:50.093 "keep_alive_timeout_ms": 10000, 00:16:50.093 "arbitration_burst": 0, 00:16:50.093 "low_priority_weight": 0, 00:16:50.093 "medium_priority_weight": 0, 00:16:50.093 "high_priority_weight": 0, 00:16:50.093 "nvme_adminq_poll_period_us": 10000, 00:16:50.093 "nvme_ioq_poll_period_us": 0, 00:16:50.093 "io_queue_requests": 0, 00:16:50.093 "delay_cmd_submit": true, 00:16:50.093 "transport_retry_count": 4, 00:16:50.093 "bdev_retry_count": 3, 00:16:50.093 "transport_ack_timeout": 0, 00:16:50.093 "ctrlr_loss_timeout_sec": 0, 00:16:50.093 "reconnect_delay_sec": 0, 00:16:50.093 "fast_io_fail_timeout_sec": 0, 00:16:50.093 "disable_auto_failback": false, 00:16:50.093 "generate_uuids": false, 00:16:50.093 "transport_tos": 0, 00:16:50.093 "nvme_error_stat": false, 00:16:50.093 "rdma_srq_size": 0, 00:16:50.093 "io_path_stat": false, 00:16:50.093 "allow_accel_sequence": false, 00:16:50.093 "rdma_max_cq_size": 0, 00:16:50.093 "rdma_cm_event_timeout_ms": 0, 00:16:50.093 "dhchap_digests": [ 00:16:50.093 "sha256", 00:16:50.093 "sha384", 00:16:50.093 "sha512" 00:16:50.093 ], 00:16:50.093 
"dhchap_dhgroups": [ 00:16:50.093 "null", 00:16:50.093 "ffdhe2048", 00:16:50.093 "ffdhe3072", 00:16:50.093 "ffdhe4096", 00:16:50.093 "ffdhe6144", 00:16:50.093 "ffdhe8192" 00:16:50.093 ] 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "bdev_nvme_set_hotplug", 00:16:50.093 "params": { 00:16:50.093 "period_us": 100000, 00:16:50.093 "enable": false 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "bdev_malloc_create", 00:16:50.093 "params": { 00:16:50.093 "name": "malloc0", 00:16:50.093 "num_blocks": 8192, 00:16:50.093 "block_size": 4096, 00:16:50.093 "physical_block_size": 4096, 00:16:50.093 "uuid": "9afb6e39-9cd2-49dc-ad58-5b8027fe0857", 00:16:50.093 "optimal_io_boundary": 0, 00:16:50.093 "md_size": 0, 00:16:50.093 "dif_type": 0, 00:16:50.093 "dif_is_head_of_md": false, 00:16:50.093 "dif_pi_format": 0 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "bdev_wait_for_examine" 00:16:50.093 } 00:16:50.093 ] 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "subsystem": "nbd", 00:16:50.093 "config": [] 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "subsystem": "scheduler", 00:16:50.093 "config": [ 00:16:50.093 { 00:16:50.093 "method": "framework_set_scheduler", 00:16:50.093 "params": { 00:16:50.093 "name": "static" 00:16:50.093 } 00:16:50.093 } 00:16:50.093 ] 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "subsystem": "nvmf", 00:16:50.093 "config": [ 00:16:50.093 { 00:16:50.093 "method": "nvmf_set_config", 00:16:50.093 "params": { 00:16:50.093 "discovery_filter": "match_any", 00:16:50.093 "admin_cmd_passthru": { 00:16:50.093 "identify_ctrlr": false 00:16:50.093 }, 00:16:50.093 "dhchap_digests": [ 00:16:50.093 "sha256", 00:16:50.093 "sha384", 00:16:50.093 "sha512" 00:16:50.093 ], 00:16:50.093 "dhchap_dhgroups": [ 00:16:50.093 "null", 00:16:50.093 "ffdhe2048", 00:16:50.093 "ffdhe3072", 00:16:50.093 "ffdhe4096", 00:16:50.093 "ffdhe6144", 00:16:50.093 "ffdhe8192" 00:16:50.093 ] 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "nvmf_set_max_subsystems", 00:16:50.093 "params": { 00:16:50.093 "max_subsystems": 1024 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "nvmf_set_crdt", 00:16:50.093 "params": { 00:16:50.093 "crdt1": 0, 00:16:50.093 "crdt2": 0, 00:16:50.093 "crdt3": 0 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "nvmf_create_transport", 00:16:50.093 "params": { 00:16:50.093 "trtype": "TCP", 00:16:50.093 "max_queue_depth": 128, 00:16:50.093 "max_io_qpairs_per_ctrlr": 127, 00:16:50.093 "in_capsule_data_size": 4096, 00:16:50.093 "max_io_size": 131072, 00:16:50.093 "io_unit_size": 131072, 00:16:50.093 "max_aq_depth": 128, 00:16:50.093 "num_shared_buffers": 511, 00:16:50.093 "buf_cache_size": 4294967295, 00:16:50.093 "dif_insert_or_strip": false, 00:16:50.093 "zcopy": false, 00:16:50.093 "c2h_success": false, 00:16:50.093 "sock_priority": 0, 00:16:50.093 "abort_timeout_sec": 1, 00:16:50.093 "ack_timeout": 0, 00:16:50.093 "data_wr_pool_size": 0 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "nvmf_create_subsystem", 00:16:50.093 "params": { 00:16:50.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.093 "allow_any_host": false, 00:16:50.093 "serial_number": "00000000000000000000", 00:16:50.093 "model_number": "SPDK bdev Controller", 00:16:50.093 "max_namespaces": 32, 00:16:50.093 "min_cntlid": 1, 00:16:50.093 "max_cntlid": 65519, 00:16:50.093 "ana_reporting": false 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "nvmf_subsystem_add_host", 
00:16:50.093 "params": { 00:16:50.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.093 "host": "nqn.2016-06.io.spdk:host1", 00:16:50.093 "psk": "key0" 00:16:50.093 } 00:16:50.093 }, 00:16:50.093 { 00:16:50.093 "method": "nvmf_subsystem_add_ns", 00:16:50.093 "params": { 00:16:50.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.093 "namespace": { 00:16:50.093 "nsid": 1, 00:16:50.093 "bdev_name": "malloc0", 00:16:50.094 "nguid": "9AFB6E399CD249DCAD585B8027FE0857", 00:16:50.094 "uuid": "9afb6e39-9cd2-49dc-ad58-5b8027fe0857", 00:16:50.094 "no_auto_visible": false 00:16:50.094 } 00:16:50.094 } 00:16:50.094 }, 00:16:50.094 { 00:16:50.094 "method": "nvmf_subsystem_add_listener", 00:16:50.094 "params": { 00:16:50.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.094 "listen_address": { 00:16:50.094 "trtype": "TCP", 00:16:50.094 "adrfam": "IPv4", 00:16:50.094 "traddr": "10.0.0.3", 00:16:50.094 "trsvcid": "4420" 00:16:50.094 }, 00:16:50.094 "secure_channel": false, 00:16:50.094 "sock_impl": "ssl" 00:16:50.094 } 00:16:50.094 } 00:16:50.094 ] 00:16:50.094 } 00:16:50.094 ] 00:16:50.094 }' 00:16:50.094 08:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:50.351 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:50.351 "subsystems": [ 00:16:50.351 { 00:16:50.351 "subsystem": "keyring", 00:16:50.351 "config": [ 00:16:50.351 { 00:16:50.351 "method": "keyring_file_add_key", 00:16:50.351 "params": { 00:16:50.351 "name": "key0", 00:16:50.351 "path": "/tmp/tmp.YtGfagDfoA" 00:16:50.351 } 00:16:50.351 } 00:16:50.351 ] 00:16:50.351 }, 00:16:50.351 { 00:16:50.351 "subsystem": "iobuf", 00:16:50.351 "config": [ 00:16:50.351 { 00:16:50.351 "method": "iobuf_set_options", 00:16:50.351 "params": { 00:16:50.351 "small_pool_count": 8192, 00:16:50.351 "large_pool_count": 1024, 00:16:50.351 "small_bufsize": 8192, 00:16:50.351 "large_bufsize": 135168, 00:16:50.351 "enable_numa": false 00:16:50.351 } 00:16:50.351 } 00:16:50.351 ] 00:16:50.351 }, 00:16:50.351 { 00:16:50.351 "subsystem": "sock", 00:16:50.351 "config": [ 00:16:50.351 { 00:16:50.351 "method": "sock_set_default_impl", 00:16:50.351 "params": { 00:16:50.351 "impl_name": "uring" 00:16:50.351 } 00:16:50.351 }, 00:16:50.351 { 00:16:50.351 "method": "sock_impl_set_options", 00:16:50.351 "params": { 00:16:50.351 "impl_name": "ssl", 00:16:50.351 "recv_buf_size": 4096, 00:16:50.351 "send_buf_size": 4096, 00:16:50.351 "enable_recv_pipe": true, 00:16:50.351 "enable_quickack": false, 00:16:50.351 "enable_placement_id": 0, 00:16:50.351 "enable_zerocopy_send_server": true, 00:16:50.351 "enable_zerocopy_send_client": false, 00:16:50.351 "zerocopy_threshold": 0, 00:16:50.351 "tls_version": 0, 00:16:50.351 "enable_ktls": false 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "sock_impl_set_options", 00:16:50.352 "params": { 00:16:50.352 "impl_name": "posix", 00:16:50.352 "recv_buf_size": 2097152, 00:16:50.352 "send_buf_size": 2097152, 00:16:50.352 "enable_recv_pipe": true, 00:16:50.352 "enable_quickack": false, 00:16:50.352 "enable_placement_id": 0, 00:16:50.352 "enable_zerocopy_send_server": true, 00:16:50.352 "enable_zerocopy_send_client": false, 00:16:50.352 "zerocopy_threshold": 0, 00:16:50.352 "tls_version": 0, 00:16:50.352 "enable_ktls": false 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "sock_impl_set_options", 00:16:50.352 "params": { 00:16:50.352 "impl_name": "uring", 00:16:50.352 
"recv_buf_size": 2097152, 00:16:50.352 "send_buf_size": 2097152, 00:16:50.352 "enable_recv_pipe": true, 00:16:50.352 "enable_quickack": false, 00:16:50.352 "enable_placement_id": 0, 00:16:50.352 "enable_zerocopy_send_server": false, 00:16:50.352 "enable_zerocopy_send_client": false, 00:16:50.352 "zerocopy_threshold": 0, 00:16:50.352 "tls_version": 0, 00:16:50.352 "enable_ktls": false 00:16:50.352 } 00:16:50.352 } 00:16:50.352 ] 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "subsystem": "vmd", 00:16:50.352 "config": [] 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "subsystem": "accel", 00:16:50.352 "config": [ 00:16:50.352 { 00:16:50.352 "method": "accel_set_options", 00:16:50.352 "params": { 00:16:50.352 "small_cache_size": 128, 00:16:50.352 "large_cache_size": 16, 00:16:50.352 "task_count": 2048, 00:16:50.352 "sequence_count": 2048, 00:16:50.352 "buf_count": 2048 00:16:50.352 } 00:16:50.352 } 00:16:50.352 ] 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "subsystem": "bdev", 00:16:50.352 "config": [ 00:16:50.352 { 00:16:50.352 "method": "bdev_set_options", 00:16:50.352 "params": { 00:16:50.352 "bdev_io_pool_size": 65535, 00:16:50.352 "bdev_io_cache_size": 256, 00:16:50.352 "bdev_auto_examine": true, 00:16:50.352 "iobuf_small_cache_size": 128, 00:16:50.352 "iobuf_large_cache_size": 16 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "bdev_raid_set_options", 00:16:50.352 "params": { 00:16:50.352 "process_window_size_kb": 1024, 00:16:50.352 "process_max_bandwidth_mb_sec": 0 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "bdev_iscsi_set_options", 00:16:50.352 "params": { 00:16:50.352 "timeout_sec": 30 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "bdev_nvme_set_options", 00:16:50.352 "params": { 00:16:50.352 "action_on_timeout": "none", 00:16:50.352 "timeout_us": 0, 00:16:50.352 "timeout_admin_us": 0, 00:16:50.352 "keep_alive_timeout_ms": 10000, 00:16:50.352 "arbitration_burst": 0, 00:16:50.352 "low_priority_weight": 0, 00:16:50.352 "medium_priority_weight": 0, 00:16:50.352 "high_priority_weight": 0, 00:16:50.352 "nvme_adminq_poll_period_us": 10000, 00:16:50.352 "nvme_ioq_poll_period_us": 0, 00:16:50.352 "io_queue_requests": 512, 00:16:50.352 "delay_cmd_submit": true, 00:16:50.352 "transport_retry_count": 4, 00:16:50.352 "bdev_retry_count": 3, 00:16:50.352 "transport_ack_timeout": 0, 00:16:50.352 "ctrlr_loss_timeout_sec": 0, 00:16:50.352 "reconnect_delay_sec": 0, 00:16:50.352 "fast_io_fail_timeout_sec": 0, 00:16:50.352 "disable_auto_failback": false, 00:16:50.352 "generate_uuids": false, 00:16:50.352 "transport_tos": 0, 00:16:50.352 "nvme_error_stat": false, 00:16:50.352 "rdma_srq_size": 0, 00:16:50.352 "io_path_stat": false, 00:16:50.352 "allow_accel_sequence": false, 00:16:50.352 "rdma_max_cq_size": 0, 00:16:50.352 "rdma_cm_event_timeout_ms": 0, 00:16:50.352 "dhchap_digests": [ 00:16:50.352 "sha256", 00:16:50.352 "sha384", 00:16:50.352 "sha512" 00:16:50.352 ], 00:16:50.352 "dhchap_dhgroups": [ 00:16:50.352 "null", 00:16:50.352 "ffdhe2048", 00:16:50.352 "ffdhe3072", 00:16:50.352 "ffdhe4096", 00:16:50.352 "ffdhe6144", 00:16:50.352 "ffdhe8192" 00:16:50.352 ] 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "bdev_nvme_attach_controller", 00:16:50.352 "params": { 00:16:50.352 "name": "nvme0", 00:16:50.352 "trtype": "TCP", 00:16:50.352 "adrfam": "IPv4", 00:16:50.352 "traddr": "10.0.0.3", 00:16:50.352 "trsvcid": "4420", 00:16:50.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.352 "prchk_reftag": false, 00:16:50.352 
"prchk_guard": false, 00:16:50.352 "ctrlr_loss_timeout_sec": 0, 00:16:50.352 "reconnect_delay_sec": 0, 00:16:50.352 "fast_io_fail_timeout_sec": 0, 00:16:50.352 "psk": "key0", 00:16:50.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.352 "hdgst": false, 00:16:50.352 "ddgst": false, 00:16:50.352 "multipath": "multipath" 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "bdev_nvme_set_hotplug", 00:16:50.352 "params": { 00:16:50.352 "period_us": 100000, 00:16:50.352 "enable": false 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "bdev_enable_histogram", 00:16:50.352 "params": { 00:16:50.352 "name": "nvme0n1", 00:16:50.352 "enable": true 00:16:50.352 } 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "method": "bdev_wait_for_examine" 00:16:50.352 } 00:16:50.352 ] 00:16:50.352 }, 00:16:50.352 { 00:16:50.352 "subsystem": "nbd", 00:16:50.352 "config": [] 00:16:50.352 } 00:16:50.352 ] 00:16:50.352 }' 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72864 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72864 ']' 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72864 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72864 00:16:50.352 killing process with pid 72864 00:16:50.352 Received shutdown signal, test time was about 1.000000 seconds 00:16:50.352 00:16:50.352 Latency(us) 00:16:50.352 [2024-11-20T08:50:21.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.352 [2024-11-20T08:50:21.267Z] =================================================================================================================== 00:16:50.352 [2024-11-20T08:50:21.267Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72864' 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72864 00:16:50.352 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72864 00:16:50.610 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72845 00:16:50.610 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72845 ']' 00:16:50.610 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72845 00:16:50.868 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:50.868 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.868 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72845 00:16:50.868 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.868 08:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.868 killing process with pid 72845 00:16:50.868 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72845' 00:16:50.868 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72845 00:16:50.868 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72845 00:16:51.127 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:51.127 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.127 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:51.127 "subsystems": [ 00:16:51.127 { 00:16:51.127 "subsystem": "keyring", 00:16:51.127 "config": [ 00:16:51.127 { 00:16:51.127 "method": "keyring_file_add_key", 00:16:51.127 "params": { 00:16:51.127 "name": "key0", 00:16:51.127 "path": "/tmp/tmp.YtGfagDfoA" 00:16:51.127 } 00:16:51.127 } 00:16:51.127 ] 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "subsystem": "iobuf", 00:16:51.127 "config": [ 00:16:51.127 { 00:16:51.127 "method": "iobuf_set_options", 00:16:51.127 "params": { 00:16:51.127 "small_pool_count": 8192, 00:16:51.127 "large_pool_count": 1024, 00:16:51.127 "small_bufsize": 8192, 00:16:51.127 "large_bufsize": 135168, 00:16:51.127 "enable_numa": false 00:16:51.127 } 00:16:51.127 } 00:16:51.127 ] 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "subsystem": "sock", 00:16:51.127 "config": [ 00:16:51.127 { 00:16:51.127 "method": "sock_set_default_impl", 00:16:51.127 "params": { 00:16:51.127 "impl_name": "uring" 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "sock_impl_set_options", 00:16:51.127 "params": { 00:16:51.127 "impl_name": "ssl", 00:16:51.127 "recv_buf_size": 4096, 00:16:51.127 "send_buf_size": 4096, 00:16:51.127 "enable_recv_pipe": true, 00:16:51.127 "enable_quickack": false, 00:16:51.127 "enable_placement_id": 0, 00:16:51.127 "enable_zerocopy_send_server": true, 00:16:51.127 "enable_zerocopy_send_client": false, 00:16:51.127 "zerocopy_threshold": 0, 00:16:51.127 "tls_version": 0, 00:16:51.127 "enable_ktls": false 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "sock_impl_set_options", 00:16:51.127 "params": { 00:16:51.127 "impl_name": "posix", 00:16:51.127 "recv_buf_size": 2097152, 00:16:51.127 "send_buf_size": 2097152, 00:16:51.127 "enable_recv_pipe": true, 00:16:51.127 "enable_quickack": false, 00:16:51.127 "enable_placement_id": 0, 00:16:51.127 "enable_zerocopy_send_server": true, 00:16:51.127 "enable_zerocopy_send_client": false, 00:16:51.127 "zerocopy_threshold": 0, 00:16:51.127 "tls_version": 0, 00:16:51.127 "enable_ktls": false 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "sock_impl_set_options", 00:16:51.127 "params": { 00:16:51.127 "impl_name": "uring", 00:16:51.127 "recv_buf_size": 2097152, 00:16:51.127 "send_buf_size": 2097152, 00:16:51.127 "enable_recv_pipe": true, 00:16:51.127 "enable_quickack": false, 00:16:51.127 "enable_placement_id": 0, 00:16:51.127 "enable_zerocopy_send_server": false, 00:16:51.127 "enable_zerocopy_send_client": false, 00:16:51.127 "zerocopy_threshold": 0, 00:16:51.127 "tls_version": 0, 00:16:51.127 "enable_ktls": false 00:16:51.127 } 00:16:51.127 } 00:16:51.127 ] 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "subsystem": "vmd", 00:16:51.127 "config": [] 00:16:51.127 }, 00:16:51.127 { 
00:16:51.127 "subsystem": "accel", 00:16:51.127 "config": [ 00:16:51.127 { 00:16:51.127 "method": "accel_set_options", 00:16:51.127 "params": { 00:16:51.127 "small_cache_size": 128, 00:16:51.127 "large_cache_size": 16, 00:16:51.127 "task_count": 2048, 00:16:51.127 "sequence_count": 2048, 00:16:51.127 "buf_count": 2048 00:16:51.127 } 00:16:51.127 } 00:16:51.127 ] 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "subsystem": "bdev", 00:16:51.127 "config": [ 00:16:51.127 { 00:16:51.127 "method": "bdev_set_options", 00:16:51.127 "params": { 00:16:51.127 "bdev_io_pool_size": 65535, 00:16:51.127 "bdev_io_cache_size": 256, 00:16:51.127 "bdev_auto_examine": true, 00:16:51.127 "iobuf_small_cache_size": 128, 00:16:51.127 "iobuf_large_cache_size": 16 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "bdev_raid_set_options", 00:16:51.127 "params": { 00:16:51.127 "process_window_size_kb": 1024, 00:16:51.127 "process_max_bandwidth_mb_sec": 0 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "bdev_iscsi_set_options", 00:16:51.127 "params": { 00:16:51.127 "timeout_sec": 30 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "bdev_nvme_set_options", 00:16:51.127 "params": { 00:16:51.127 "action_on_timeout": "none", 00:16:51.127 "timeout_us": 0, 00:16:51.127 "timeout_admin_us": 0, 00:16:51.127 "keep_alive_timeout_ms": 10000, 00:16:51.127 "arbitration_burst": 0, 00:16:51.127 "low_priority_weight": 0, 00:16:51.127 "medium_priority_weight": 0, 00:16:51.127 "high_priority_weight": 0, 00:16:51.127 "nvme_adminq_poll_period_us": 10000, 00:16:51.127 "nvme_ioq_poll_period_us": 0, 00:16:51.127 "io_queue_requests": 0, 00:16:51.127 "delay_cmd_submit": true, 00:16:51.127 "transport_retry_count": 4, 00:16:51.127 "bdev_retry_count": 3, 00:16:51.127 "transport_ack_timeout": 0, 00:16:51.127 "ctrlr_loss_timeout_sec": 0, 00:16:51.127 "reconnect_delay_sec": 0, 00:16:51.127 "fast_io_fail_timeout_sec": 0, 00:16:51.127 "disable_auto_failback": false, 00:16:51.127 "generate_uuids": false, 00:16:51.127 "transport_tos": 0, 00:16:51.127 "nvme_error_stat": false, 00:16:51.127 "rdma_srq_size": 0, 00:16:51.127 "io_path_stat": false, 00:16:51.127 "allow_accel_sequence": false, 00:16:51.127 "rdma_max_cq_size": 0, 00:16:51.127 "rdma_cm_event_timeout_ms": 0, 00:16:51.127 "dhchap_digests": [ 00:16:51.127 "sha256", 00:16:51.127 "sha384", 00:16:51.127 "sha512" 00:16:51.127 ], 00:16:51.127 "dhchap_dhgroups": [ 00:16:51.127 "null", 00:16:51.127 "ffdhe2048", 00:16:51.127 "ffdhe3072", 00:16:51.127 "ffdhe4096", 00:16:51.127 "ffdhe6144", 00:16:51.127 "ffdhe8192" 00:16:51.127 ] 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "bdev_nvme_set_hotplug", 00:16:51.127 "params": { 00:16:51.127 "period_us": 100000, 00:16:51.127 "enable": false 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "bdev_malloc_create", 00:16:51.127 "params": { 00:16:51.127 "name": "malloc0", 00:16:51.127 "num_blocks": 8192, 00:16:51.127 "block_size": 4096, 00:16:51.127 "physical_block_size": 4096, 00:16:51.127 "uuid": "9afb6e39-9cd2-49dc-ad58-5b8027fe0857", 00:16:51.127 "optimal_io_boundary": 0, 00:16:51.127 "md_size": 0, 00:16:51.127 "dif_type": 0, 00:16:51.127 "dif_is_head_of_md": false, 00:16:51.127 "dif_pi_format": 0 00:16:51.127 } 00:16:51.127 }, 00:16:51.127 { 00:16:51.127 "method": "bdev_wait_for_examine" 00:16:51.127 } 00:16:51.127 ] 00:16:51.127 }, 00:16:51.127 { 00:16:51.128 "subsystem": "nbd", 00:16:51.128 "config": [] 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "subsystem": 
"scheduler", 00:16:51.128 "config": [ 00:16:51.128 { 00:16:51.128 "method": "framework_set_scheduler", 00:16:51.128 "params": { 00:16:51.128 "name": "static" 00:16:51.128 } 00:16:51.128 } 00:16:51.128 ] 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "subsystem": "nvmf", 00:16:51.128 "config": [ 00:16:51.128 { 00:16:51.128 "method": "nvmf_set_config", 00:16:51.128 "params": { 00:16:51.128 "discovery_filter": "match_any", 00:16:51.128 "admin_cmd_passthru": { 00:16:51.128 "identify_ctrlr": false 00:16:51.128 }, 00:16:51.128 "dhchap_digests": [ 00:16:51.128 "sha256", 00:16:51.128 "sha384", 00:16:51.128 "sha512" 00:16:51.128 ], 00:16:51.128 "dhchap_dhgroups": [ 00:16:51.128 "null", 00:16:51.128 "ffdhe2048", 00:16:51.128 "ffdhe3072", 00:16:51.128 "ffdhe4096", 00:16:51.128 "ffdhe6144", 00:16:51.128 "ffdhe8192" 00:16:51.128 ] 00:16:51.128 } 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "method": "nvmf_set_max_subsystems", 00:16:51.128 "params": { 00:16:51.128 "max_subsystems": 1024 00:16:51.128 } 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "method": "nvmf_set_crdt", 00:16:51.128 "params": { 00:16:51.128 "crdt1": 0, 00:16:51.128 "crdt2": 0, 00:16:51.128 "crdt3": 0 00:16:51.128 } 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "method": "nvmf_create_transport", 00:16:51.128 "params": { 00:16:51.128 "trtype": "TCP", 00:16:51.128 "max_queue_depth": 128, 00:16:51.128 "max_io_qpairs_per_ctrlr": 127, 00:16:51.128 "in_capsule_data_size": 4096, 00:16:51.128 "max_io_size": 131072, 00:16:51.128 "io_unit_size": 131072, 00:16:51.128 "max_aq_depth": 128, 00:16:51.128 "num_shared_buffers": 511, 00:16:51.128 "buf_cache_size": 4294967295, 00:16:51.128 "dif_insert_or_strip": false, 00:16:51.128 "zcopy": false, 00:16:51.128 "c2h_success": false, 00:16:51.128 "sock_priority": 0, 00:16:51.128 "abort_timeout_sec": 1, 00:16:51.128 "ack_timeout": 0, 00:16:51.128 "data_wr_pool_size": 0 00:16:51.128 } 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "method": "nvmf_create_subsystem", 00:16:51.128 "params": { 00:16:51.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.128 "allow_any_host": false, 00:16:51.128 "serial_number": "00000000000000000000", 00:16:51.128 "model_number": "SPDK bdev Controller", 00:16:51.128 "max_namespaces": 32, 00:16:51.128 "min_cntlid": 1, 00:16:51.128 "max_cntlid": 65519, 00:16:51.128 "ana_reporting": false 00:16:51.128 } 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "method": "nvmf_subsystem_add_host", 00:16:51.128 "params": { 00:16:51.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.128 "host": "nqn.2016-06.io.spdk:host1", 00:16:51.128 "psk": "key0" 00:16:51.128 } 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "method": "nvmf_subsystem_add_ns", 00:16:51.128 "params": { 00:16:51.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.128 "namespace": { 00:16:51.128 "nsid": 1, 00:16:51.128 "bdev_name": "malloc0", 00:16:51.128 "nguid": "9AFB6E399CD249DCAD585B8027FE0857", 00:16:51.128 "uuid": "9afb6e39-9cd2-49dc-ad58-5b8027fe0857", 00:16:51.128 "no_auto_visible": false 00:16:51.128 } 00:16:51.128 } 00:16:51.128 }, 00:16:51.128 { 00:16:51.128 "method": "nvmf_subsystem_add_listener", 00:16:51.128 "params": { 00:16:51.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.128 "listen_address": { 00:16:51.128 "trtype": "TCP", 00:16:51.128 "adrfam": "IPv4", 00:16:51.128 "traddr": "10.0.0.3", 00:16:51.128 "trsvcid": "4420" 00:16:51.128 }, 00:16:51.128 "secure_channel": false, 00:16:51.128 "sock_impl": "ssl" 00:16:51.128 } 00:16:51.128 } 00:16:51.128 ] 00:16:51.128 } 00:16:51.128 ] 00:16:51.128 }' 00:16:51.128 08:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72923 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72923 00:16:51.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72923 ']' 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.128 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.128 [2024-11-20 08:50:21.906911] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:51.128 [2024-11-20 08:50:21.907267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.387 [2024-11-20 08:50:22.056677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.387 [2024-11-20 08:50:22.130095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.387 [2024-11-20 08:50:22.130488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.387 [2024-11-20 08:50:22.130633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.387 [2024-11-20 08:50:22.130648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.387 [2024-11-20 08:50:22.130656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
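This second nvmf_tgt is not configured interactively: the tgtcfg JSON captured with save_config above is echoed into the application through -c /dev/fd/62, i.e. bash process substitution, so the malloc0 namespace, the TLS listener and the key0 host mapping are all recreated at startup without further RPCs. A sketch of that pattern with the binary, namespace and flags from this run ($tgtcfg holds the JSON shown above; the nvmfpid assignment is only illustrative):

    # Restore a saved target configuration at startup instead of replaying RPCs.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$tgtcfg") &
    nvmfpid=$!
    # The target is ready once it logs the 10.0.0.3:4420 listener again.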
00:16:51.387 [2024-11-20 08:50:22.131200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.645 [2024-11-20 08:50:22.319153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:51.645 [2024-11-20 08:50:22.413285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.645 [2024-11-20 08:50:22.445195] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:51.645 [2024-11-20 08:50:22.445448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72955 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72955 /var/tmp/bdevperf.sock 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72955 ']' 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:52.211 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:52.211 "subsystems": [ 00:16:52.211 { 00:16:52.211 "subsystem": "keyring", 00:16:52.211 "config": [ 00:16:52.211 { 00:16:52.211 "method": "keyring_file_add_key", 00:16:52.211 "params": { 00:16:52.211 "name": "key0", 00:16:52.211 "path": "/tmp/tmp.YtGfagDfoA" 00:16:52.211 } 00:16:52.211 } 00:16:52.211 ] 00:16:52.211 }, 00:16:52.211 { 00:16:52.211 "subsystem": "iobuf", 00:16:52.211 "config": [ 00:16:52.211 { 00:16:52.211 "method": "iobuf_set_options", 00:16:52.211 "params": { 00:16:52.211 "small_pool_count": 8192, 00:16:52.211 "large_pool_count": 1024, 00:16:52.211 "small_bufsize": 8192, 00:16:52.211 "large_bufsize": 135168, 00:16:52.211 "enable_numa": false 00:16:52.211 } 00:16:52.211 } 00:16:52.211 ] 00:16:52.211 }, 00:16:52.211 { 00:16:52.211 "subsystem": "sock", 00:16:52.211 "config": [ 00:16:52.211 { 00:16:52.211 "method": "sock_set_default_impl", 00:16:52.211 "params": { 00:16:52.211 "impl_name": "uring" 00:16:52.211 } 00:16:52.211 }, 00:16:52.211 { 00:16:52.211 "method": "sock_impl_set_options", 00:16:52.211 "params": { 00:16:52.211 "impl_name": "ssl", 00:16:52.211 "recv_buf_size": 4096, 00:16:52.211 "send_buf_size": 4096, 00:16:52.211 "enable_recv_pipe": true, 00:16:52.211 "enable_quickack": false, 00:16:52.211 "enable_placement_id": 0, 00:16:52.211 "enable_zerocopy_send_server": true, 00:16:52.211 "enable_zerocopy_send_client": false, 00:16:52.211 "zerocopy_threshold": 0, 00:16:52.211 "tls_version": 0, 00:16:52.211 "enable_ktls": false 00:16:52.211 } 00:16:52.211 }, 00:16:52.211 { 00:16:52.211 "method": "sock_impl_set_options", 00:16:52.211 "params": { 00:16:52.211 "impl_name": "posix", 00:16:52.211 "recv_buf_size": 2097152, 00:16:52.211 "send_buf_size": 2097152, 00:16:52.211 "enable_recv_pipe": true, 00:16:52.211 "enable_quickack": false, 00:16:52.211 "enable_placement_id": 0, 00:16:52.211 "enable_zerocopy_send_server": true, 00:16:52.211 "enable_zerocopy_send_client": false, 00:16:52.211 "zerocopy_threshold": 0, 00:16:52.211 "tls_version": 0, 00:16:52.211 "enable_ktls": false 00:16:52.211 } 00:16:52.211 }, 00:16:52.211 { 00:16:52.211 "method": "sock_impl_set_options", 00:16:52.211 "params": { 00:16:52.211 "impl_name": "uring", 00:16:52.211 "recv_buf_size": 2097152, 00:16:52.211 "send_buf_size": 2097152, 00:16:52.211 "enable_recv_pipe": true, 00:16:52.211 "enable_quickack": false, 00:16:52.211 "enable_placement_id": 0, 00:16:52.211 "enable_zerocopy_send_server": false, 00:16:52.211 "enable_zerocopy_send_client": false, 00:16:52.211 "zerocopy_threshold": 0, 00:16:52.211 "tls_version": 0, 00:16:52.211 "enable_ktls": false 00:16:52.211 } 00:16:52.211 } 00:16:52.211 ] 00:16:52.211 }, 00:16:52.211 { 00:16:52.211 "subsystem": "vmd", 00:16:52.211 "config": [] 00:16:52.211 }, 00:16:52.211 { 00:16:52.211 "subsystem": "accel", 00:16:52.211 "config": [ 00:16:52.211 { 00:16:52.211 "method": "accel_set_options", 00:16:52.211 "params": { 00:16:52.211 "small_cache_size": 128, 00:16:52.211 "large_cache_size": 16, 00:16:52.211 "task_count": 2048, 00:16:52.211 "sequence_count": 2048, 
00:16:52.211 "buf_count": 2048 00:16:52.211 } 00:16:52.211 } 00:16:52.211 ] 00:16:52.211 }, 00:16:52.211 { 00:16:52.211 "subsystem": "bdev", 00:16:52.211 "config": [ 00:16:52.212 { 00:16:52.212 "method": "bdev_set_options", 00:16:52.212 "params": { 00:16:52.212 "bdev_io_pool_size": 65535, 00:16:52.212 "bdev_io_cache_size": 256, 00:16:52.212 "bdev_auto_examine": true, 00:16:52.212 "iobuf_small_cache_size": 128, 00:16:52.212 "iobuf_large_cache_size": 16 00:16:52.212 } 00:16:52.212 }, 00:16:52.212 { 00:16:52.212 "method": "bdev_raid_set_options", 00:16:52.212 "params": { 00:16:52.212 "process_window_size_kb": 1024, 00:16:52.212 "process_max_bandwidth_mb_sec": 0 00:16:52.212 } 00:16:52.212 }, 00:16:52.212 { 00:16:52.212 "method": "bdev_iscsi_set_options", 00:16:52.212 "params": { 00:16:52.212 "timeout_sec": 30 00:16:52.212 } 00:16:52.212 }, 00:16:52.212 { 00:16:52.212 "method": "bdev_nvme_set_options", 00:16:52.212 "params": { 00:16:52.212 "action_on_timeout": "none", 00:16:52.212 "timeout_us": 0, 00:16:52.212 "timeout_admin_us": 0, 00:16:52.212 "keep_alive_timeout_ms": 10000, 00:16:52.212 "arbitration_burst": 0, 00:16:52.212 "low_priority_weight": 0, 00:16:52.212 "medium_priority_weight": 0, 00:16:52.212 "high_priority_weight": 0, 00:16:52.212 "nvme_adminq_poll_period_us": 10000, 00:16:52.212 "nvme_ioq_poll_period_us": 0, 00:16:52.212 "io_queue_requests": 512, 00:16:52.212 "delay_cmd_submit": true, 00:16:52.212 "transport_retry_count": 4, 00:16:52.212 "bdev_retry_count": 3, 00:16:52.212 "transport_ack_timeout": 0, 00:16:52.212 "ctrlr_loss_timeout_sec": 0, 00:16:52.212 "reconnect_delay_sec": 0, 00:16:52.212 "fast_io_fail_timeout_sec": 0, 00:16:52.212 "disable_auto_failback": false, 00:16:52.212 "generate_uuids": false, 00:16:52.212 "transport_tos": 0, 00:16:52.212 "nvme_error_stat": false, 00:16:52.212 "rdma_srq_size": 0, 00:16:52.212 "io_path_stat": false, 00:16:52.212 "allow_accel_sequence": false, 00:16:52.212 "rdma_max_cq_size": 0, 00:16:52.212 "rdma_cm_event_timeout_ms": 0, 00:16:52.212 "dhchap_digests": [ 00:16:52.212 "sha256", 00:16:52.212 "sha384", 00:16:52.212 "sha512" 00:16:52.212 ], 00:16:52.212 "dhchap_dhgroups": [ 00:16:52.212 "null", 00:16:52.212 "ffdhe2048", 00:16:52.212 "ffdhe3072", 00:16:52.212 "ffdhe4096", 00:16:52.212 "ffdhe6144", 00:16:52.212 "ffdhe8192" 00:16:52.212 ] 00:16:52.212 } 00:16:52.212 }, 00:16:52.212 { 00:16:52.212 "method": "bdev_nvme_attach_controller", 00:16:52.212 "params": { 00:16:52.212 "name": "nvme0", 00:16:52.212 "trtype": "TCP", 00:16:52.212 "adrfam": "IPv4", 00:16:52.212 "traddr": "10.0.0.3", 00:16:52.212 "trsvcid": "4420", 00:16:52.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.212 "prchk_reftag": false, 00:16:52.212 "prchk_guard": false, 00:16:52.212 "ctrlr_loss_timeout_sec": 0, 00:16:52.212 "reconnect_delay_sec": 0, 00:16:52.212 "fast_io_fail_timeout_sec": 0, 00:16:52.212 "psk": "key0", 00:16:52.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.212 "hdgst": false, 00:16:52.212 "ddgst": false, 00:16:52.212 "multipath": "multipath" 00:16:52.212 } 00:16:52.212 }, 00:16:52.212 { 00:16:52.212 "method": "bdev_nvme_set_hotplug", 00:16:52.212 "params": { 00:16:52.212 "period_us": 100000, 00:16:52.212 "enable": false 00:16:52.212 } 00:16:52.212 }, 00:16:52.212 { 00:16:52.212 "method": "bdev_enable_histogram", 00:16:52.212 "params": { 00:16:52.212 "name": "nvme0n1", 00:16:52.212 "enable": true 00:16:52.212 } 00:16:52.212 }, 00:16:52.212 { 00:16:52.212 "method": "bdev_wait_for_examine" 00:16:52.212 } 00:16:52.212 ] 00:16:52.212 }, 00:16:52.212 { 
00:16:52.212 "subsystem": "nbd", 00:16:52.212 "config": [] 00:16:52.212 } 00:16:52.212 ] 00:16:52.212 }' 00:16:52.212 [2024-11-20 08:50:22.962557] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:52.212 [2024-11-20 08:50:22.962877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72955 ] 00:16:52.212 [2024-11-20 08:50:23.112923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.470 [2024-11-20 08:50:23.185842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.470 [2024-11-20 08:50:23.341069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.729 [2024-11-20 08:50:23.399710] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.361 08:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.361 08:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:53.361 08:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:53.361 08:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:53.650 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.650 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.650 Running I/O for 1 seconds... 
00:16:54.585 4071.00 IOPS, 15.90 MiB/s 00:16:54.585 Latency(us) 00:16:54.585 [2024-11-20T08:50:25.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.585 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:54.585 Verification LBA range: start 0x0 length 0x2000 00:16:54.585 nvme0n1 : 1.03 4086.82 15.96 0.00 0.00 30931.33 9532.51 22163.08 00:16:54.585 [2024-11-20T08:50:25.500Z] =================================================================================================================== 00:16:54.585 [2024-11-20T08:50:25.501Z] Total : 4086.82 15.96 0.00 0.00 30931.33 9532.51 22163.08 00:16:54.586 { 00:16:54.586 "results": [ 00:16:54.586 { 00:16:54.586 "job": "nvme0n1", 00:16:54.586 "core_mask": "0x2", 00:16:54.586 "workload": "verify", 00:16:54.586 "status": "finished", 00:16:54.586 "verify_range": { 00:16:54.586 "start": 0, 00:16:54.586 "length": 8192 00:16:54.586 }, 00:16:54.586 "queue_depth": 128, 00:16:54.586 "io_size": 4096, 00:16:54.586 "runtime": 1.027695, 00:16:54.586 "iops": 4086.8156408272885, 00:16:54.586 "mibps": 15.964123596981596, 00:16:54.586 "io_failed": 0, 00:16:54.586 "io_timeout": 0, 00:16:54.586 "avg_latency_us": 30931.33232207792, 00:16:54.586 "min_latency_us": 9532.50909090909, 00:16:54.586 "max_latency_us": 22163.083636363637 00:16:54.586 } 00:16:54.586 ], 00:16:54.586 "core_count": 1 00:16:54.586 } 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:54.586 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:54.586 nvmf_trace.0 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72955 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72955 ']' 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72955 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72955 00:16:54.844 killing process 
with pid 72955 00:16:54.844 Received shutdown signal, test time was about 1.000000 seconds 00:16:54.844 00:16:54.844 Latency(us) 00:16:54.844 [2024-11-20T08:50:25.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.844 [2024-11-20T08:50:25.759Z] =================================================================================================================== 00:16:54.844 [2024-11-20T08:50:25.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72955' 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72955 00:16:54.844 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72955 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.105 rmmod nvme_tcp 00:16:55.105 rmmod nvme_fabrics 00:16:55.105 rmmod nvme_keyring 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72923 ']' 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72923 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72923 ']' 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72923 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72923 00:16:55.105 killing process with pid 72923 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72923' 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72923 00:16:55.105 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72923 
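Reading the two bdevperf result blocks above is mostly arithmetic: the reported mibps is iops times the 4 KiB I/O size, and iops times runtime gives the number of I/Os the verify job completed in its roughly one-second window. A quick cross-check against the second run (values copied from the JSON above; the first run works out the same way):

    awk 'BEGIN {
        iops = 4086.8156408272885; runtime = 1.027695; io_size = 4096
        printf "%.3f MiB/s\n", iops * io_size / 1048576   # ~15.964, the reported mibps
        printf "%.0f I/Os\n",  iops * runtime             # ~4200 completed 4 KiB I/Os
    }'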
00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:55.364 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MFkTJxkMOM /tmp/tmp.l8dm8lbt1J /tmp/tmp.YtGfagDfoA 00:16:55.622 ************************************ 00:16:55.622 END TEST nvmf_tls 00:16:55.622 ************************************ 00:16:55.622 00:16:55.622 real 1m28.371s 00:16:55.622 user 2m23.758s 00:16:55.622 sys 0m28.046s 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:16:55.622 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.881 ************************************ 00:16:55.881 START TEST nvmf_fips 00:16:55.881 ************************************ 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:55.881 * Looking for test storage... 00:16:55.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:55.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.881 --rc genhtml_branch_coverage=1 00:16:55.881 --rc genhtml_function_coverage=1 00:16:55.881 --rc genhtml_legend=1 00:16:55.881 --rc geninfo_all_blocks=1 00:16:55.881 --rc geninfo_unexecuted_blocks=1 00:16:55.881 00:16:55.881 ' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:55.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.881 --rc genhtml_branch_coverage=1 00:16:55.881 --rc genhtml_function_coverage=1 00:16:55.881 --rc genhtml_legend=1 00:16:55.881 --rc geninfo_all_blocks=1 00:16:55.881 --rc geninfo_unexecuted_blocks=1 00:16:55.881 00:16:55.881 ' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:55.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.881 --rc genhtml_branch_coverage=1 00:16:55.881 --rc genhtml_function_coverage=1 00:16:55.881 --rc genhtml_legend=1 00:16:55.881 --rc geninfo_all_blocks=1 00:16:55.881 --rc geninfo_unexecuted_blocks=1 00:16:55.881 00:16:55.881 ' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:55.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.881 --rc genhtml_branch_coverage=1 00:16:55.881 --rc genhtml_function_coverage=1 00:16:55.881 --rc genhtml_legend=1 00:16:55.881 --rc geninfo_all_blocks=1 00:16:55.881 --rc geninfo_unexecuted_blocks=1 00:16:55.881 00:16:55.881 ' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
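[editor's note] The lcov check traced above runs scripts/common.sh's component-wise version comparison (lt 1.15 2 -> cmp_versions 1.15 '<' 2: split on '.', '-' and ':', then compare field by field). A minimal bash sketch of that idea, assuming only what the trace shows; the helper name cmp_lt is illustrative and not the SPDK function:

cmp_lt() {                           # returns 0 when $1 is an older version than $2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # missing fields default to 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                          # equal versions are not "less than"
}
cmp_lt 1.15 2 && echo "lcov is older than 2"   # mirrors the 'lt 1.15 2' call above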
00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.881 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:55.882 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:55.882 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:56.140 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:56.141 Error setting digest 00:16:56.141 40B2E873517F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:56.141 40B2E873517F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.141 
08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:56.141 Cannot find device "nvmf_init_br" 00:16:56.141 08:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:56.141 Cannot find device "nvmf_init_br2" 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:56.141 Cannot find device "nvmf_tgt_br" 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.141 Cannot find device "nvmf_tgt_br2" 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:56.141 Cannot find device "nvmf_init_br" 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:56.141 Cannot find device "nvmf_init_br2" 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:56.141 08:50:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:56.141 Cannot find device "nvmf_tgt_br" 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:56.141 Cannot find device "nvmf_tgt_br2" 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:56.141 Cannot find device "nvmf_br" 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:56.141 Cannot find device "nvmf_init_if" 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:56.141 Cannot find device "nvmf_init_if2" 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:56.141 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.400 08:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:56.400 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.400 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:16:56.400 00:16:56.400 --- 10.0.0.3 ping statistics --- 00:16:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.400 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:56.400 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:56.400 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:16:56.400 00:16:56.400 --- 10.0.0.4 ping statistics --- 00:16:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.400 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:56.400 00:16:56.400 --- 10.0.0.1 ping statistics --- 00:16:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.400 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:56.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:56.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:56.400 00:16:56.400 --- 10.0.0.2 ping statistics --- 00:16:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.400 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.400 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:56.401 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73275 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73275 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73275 ']' 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.658 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.659 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.659 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.659 [2024-11-20 08:50:27.453291] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
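[editor's note] Before nvmf_tgt starts, nvmf_veth_init (traced above) builds the virtual test network: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all bridge ends enslaved to nvmf_br. A condensed sketch with names and addresses copied from the trace; the second initiator/target pair and the iptables ACCEPT rules are elided here:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair (host side)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                        # bridge joins both sides
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                             # host-side sanity check, as in the trace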
00:16:56.659 [2024-11-20 08:50:27.453616] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.917 [2024-11-20 08:50:27.598751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.917 [2024-11-20 08:50:27.679836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.917 [2024-11-20 08:50:27.680259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.917 [2024-11-20 08:50:27.680424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.917 [2024-11-20 08:50:27.680483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.917 [2024-11-20 08:50:27.680599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.917 [2024-11-20 08:50:27.681150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.917 [2024-11-20 08:50:27.756647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.pce 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.pce 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.pce 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.pce 00:16:57.851 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.851 [2024-11-20 08:50:28.742790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.851 [2024-11-20 08:50:28.758725] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:57.851 [2024-11-20 08:50:28.758979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:58.109 malloc0 00:16:58.109 08:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73314 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73314 /var/tmp/bdevperf.sock 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73314 ']' 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.109 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:58.109 [2024-11-20 08:50:28.898174] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:58.109 [2024-11-20 08:50:28.898265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73314 ] 00:16:58.366 [2024-11-20 08:50:29.047231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.366 [2024-11-20 08:50:29.116639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.366 [2024-11-20 08:50:29.194945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:58.366 08:50:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.366 08:50:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:58.367 08:50:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.pce 00:16:58.932 08:50:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:58.932 [2024-11-20 08:50:29.778282] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.190 TLSTESTn1 00:16:59.190 08:50:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:59.190 Running I/O for 10 seconds... 
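[editor's note] Before the I/O loop starts, the FIPS test wires the TLS PSK into bdevperf over its RPC socket and attaches a TLS-enabled controller. A stripped-down sketch of that sequence using only the paths, key material and flags visible in the trace above; waitforlisten polling, error handling and cleanup are omitted:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
key_path=/tmp/spdk-psk.pce            # created via mktemp -t spdk-psk.XXX in the trace
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"
$rpc -s "$sock" keyring_file_add_key key0 "$key_path"
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests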
00:17:01.498 4014.00 IOPS, 15.68 MiB/s [2024-11-20T08:50:33.366Z] 4091.00 IOPS, 15.98 MiB/s [2024-11-20T08:50:34.299Z] 4088.00 IOPS, 15.97 MiB/s [2024-11-20T08:50:35.232Z] 4111.00 IOPS, 16.06 MiB/s [2024-11-20T08:50:36.164Z] 4112.00 IOPS, 16.06 MiB/s [2024-11-20T08:50:37.097Z] 4113.83 IOPS, 16.07 MiB/s [2024-11-20T08:50:38.029Z] 4116.71 IOPS, 16.08 MiB/s [2024-11-20T08:50:39.402Z] 4123.00 IOPS, 16.11 MiB/s [2024-11-20T08:50:40.017Z] 4123.44 IOPS, 16.11 MiB/s [2024-11-20T08:50:40.017Z] 4123.90 IOPS, 16.11 MiB/s 00:17:09.102 Latency(us) 00:17:09.102 [2024-11-20T08:50:40.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.102 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:09.102 Verification LBA range: start 0x0 length 0x2000 00:17:09.102 TLSTESTn1 : 10.02 4129.54 16.13 0.00 0.00 30938.58 5719.51 24427.05 00:17:09.102 [2024-11-20T08:50:40.017Z] =================================================================================================================== 00:17:09.102 [2024-11-20T08:50:40.017Z] Total : 4129.54 16.13 0.00 0.00 30938.58 5719.51 24427.05 00:17:09.102 { 00:17:09.102 "results": [ 00:17:09.102 { 00:17:09.102 "job": "TLSTESTn1", 00:17:09.102 "core_mask": "0x4", 00:17:09.102 "workload": "verify", 00:17:09.102 "status": "finished", 00:17:09.102 "verify_range": { 00:17:09.102 "start": 0, 00:17:09.102 "length": 8192 00:17:09.102 }, 00:17:09.102 "queue_depth": 128, 00:17:09.102 "io_size": 4096, 00:17:09.102 "runtime": 10.01637, 00:17:09.102 "iops": 4129.539943113124, 00:17:09.102 "mibps": 16.13101540278564, 00:17:09.102 "io_failed": 0, 00:17:09.102 "io_timeout": 0, 00:17:09.102 "avg_latency_us": 30938.583747178527, 00:17:09.102 "min_latency_us": 5719.505454545455, 00:17:09.102 "max_latency_us": 24427.054545454546 00:17:09.102 } 00:17:09.102 ], 00:17:09.102 "core_count": 1 00:17:09.102 } 00:17:09.102 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:09.102 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:09.102 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:09.102 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:09.102 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:09.360 nvmf_trace.0 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73314 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73314 ']' 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73314 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73314 00:17:09.360 killing process with pid 73314 00:17:09.360 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.360 00:17:09.360 Latency(us) 00:17:09.360 [2024-11-20T08:50:40.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.360 [2024-11-20T08:50:40.275Z] =================================================================================================================== 00:17:09.360 [2024-11-20T08:50:40.275Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73314' 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73314 00:17:09.360 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73314 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.618 rmmod nvme_tcp 00:17:09.618 rmmod nvme_fabrics 00:17:09.618 rmmod nvme_keyring 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73275 ']' 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73275 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73275 ']' 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73275 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73275 00:17:09.618 killing process with pid 73275 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73275' 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73275 00:17:09.618 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73275 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.184 08:50:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:10.184 08:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.pce 00:17:10.184 ************************************ 00:17:10.184 END TEST nvmf_fips 00:17:10.184 ************************************ 00:17:10.184 00:17:10.184 real 0m14.489s 00:17:10.184 user 0m19.912s 00:17:10.184 sys 0m5.582s 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.184 08:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.443 ************************************ 00:17:10.443 START TEST nvmf_control_msg_list 00:17:10.443 ************************************ 00:17:10.443 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:10.443 * Looking for test storage... 00:17:10.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:10.443 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.443 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.443 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.444 --rc genhtml_branch_coverage=1 00:17:10.444 --rc genhtml_function_coverage=1 00:17:10.444 --rc genhtml_legend=1 00:17:10.444 --rc geninfo_all_blocks=1 00:17:10.444 --rc geninfo_unexecuted_blocks=1 00:17:10.444 00:17:10.444 ' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.444 --rc genhtml_branch_coverage=1 00:17:10.444 --rc genhtml_function_coverage=1 00:17:10.444 --rc genhtml_legend=1 00:17:10.444 --rc geninfo_all_blocks=1 00:17:10.444 --rc geninfo_unexecuted_blocks=1 00:17:10.444 00:17:10.444 ' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.444 --rc genhtml_branch_coverage=1 00:17:10.444 --rc genhtml_function_coverage=1 00:17:10.444 --rc genhtml_legend=1 00:17:10.444 --rc geninfo_all_blocks=1 00:17:10.444 --rc geninfo_unexecuted_blocks=1 00:17:10.444 00:17:10.444 ' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.444 --rc genhtml_branch_coverage=1 00:17:10.444 --rc genhtml_function_coverage=1 00:17:10.444 --rc genhtml_legend=1 00:17:10.444 --rc geninfo_all_blocks=1 00:17:10.444 --rc 
geninfo_unexecuted_blocks=1 00:17:10.444 00:17:10.444 ' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.444 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.444 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:10.445 Cannot find device "nvmf_init_br" 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:10.445 Cannot find device "nvmf_init_br2" 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:10.445 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:10.704 Cannot find device "nvmf_tgt_br" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.704 Cannot find device "nvmf_tgt_br2" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:10.704 Cannot find device "nvmf_init_br" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:10.704 Cannot find device "nvmf_init_br2" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:10.704 Cannot find device "nvmf_tgt_br" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:10.704 Cannot find device "nvmf_tgt_br2" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:10.704 Cannot find device "nvmf_br" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:10.704 Cannot find 
device "nvmf_init_if" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:10.704 Cannot find device "nvmf_init_if2" 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.704 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.704 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:10.704 08:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:10.704 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:10.963 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:10.963 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:17:10.963 00:17:10.963 --- 10.0.0.3 ping statistics --- 00:17:10.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.963 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:10.963 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:10.963 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:17:10.963 00:17:10.963 --- 10.0.0.4 ping statistics --- 00:17:10.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.963 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:10.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:10.963 00:17:10.963 --- 10.0.0.1 ping statistics --- 00:17:10.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.963 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:10.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:17:10.963 00:17:10.963 --- 10.0.0.2 ping statistics --- 00:17:10.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.963 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73697 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73697 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73697 ']' 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.963 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:10.963 [2024-11-20 08:50:41.823983] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:10.963 [2024-11-20 08:50:41.824099] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.222 [2024-11-20 08:50:41.976225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.222 [2024-11-20 08:50:42.048883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.222 [2024-11-20 08:50:42.048956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.222 [2024-11-20 08:50:42.048972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.222 [2024-11-20 08:50:42.048984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.222 [2024-11-20 08:50:42.048993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.222 [2024-11-20 08:50:42.049490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.222 [2024-11-20 08:50:42.125894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.481 [2024-11-20 08:50:42.256273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.481 Malloc0 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.481 [2024-11-20 08:50:42.307567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73726 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73727 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73728 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:11.481 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73726 00:17:11.739 [2024-11-20 08:50:42.502499] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:11.739 [2024-11-20 08:50:42.503031] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:11.739 [2024-11-20 08:50:42.503484] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:12.673 Initializing NVMe Controllers 00:17:12.673 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.673 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:12.673 Initialization complete. Launching workers. 00:17:12.673 ======================================================== 00:17:12.673 Latency(us) 00:17:12.674 Device Information : IOPS MiB/s Average min max 00:17:12.674 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3335.00 13.03 299.53 148.64 623.94 00:17:12.674 ======================================================== 00:17:12.674 Total : 3335.00 13.03 299.53 148.64 623.94 00:17:12.674 00:17:12.674 Initializing NVMe Controllers 00:17:12.674 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.674 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:12.674 Initialization complete. Launching workers. 00:17:12.674 ======================================================== 00:17:12.674 Latency(us) 00:17:12.674 Device Information : IOPS MiB/s Average min max 00:17:12.674 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3336.98 13.04 299.37 229.29 810.65 00:17:12.674 ======================================================== 00:17:12.674 Total : 3336.98 13.04 299.37 229.29 810.65 00:17:12.674 00:17:12.674 Initializing NVMe Controllers 00:17:12.674 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.674 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:12.674 Initialization complete. Launching workers. 
00:17:12.674 ======================================================== 00:17:12.674 Latency(us) 00:17:12.674 Device Information : IOPS MiB/s Average min max 00:17:12.674 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3331.00 13.01 299.75 233.13 761.93 00:17:12.674 ======================================================== 00:17:12.674 Total : 3331.00 13.01 299.75 233.13 761.93 00:17:12.674 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73727 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73728 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:12.674 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:12.674 rmmod nvme_tcp 00:17:12.932 rmmod nvme_fabrics 00:17:12.932 rmmod nvme_keyring 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73697 ']' 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73697 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73697 ']' 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73697 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73697 00:17:12.932 killing process with pid 73697 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73697' 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73697 00:17:12.932 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73697 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:13.190 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:13.190 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.190 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:13.190 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:13.190 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:13.190 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:13.190 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:13.190 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:13.449 00:17:13.449 real 0m3.105s 00:17:13.449 user 0m4.855s 00:17:13.449 
sys 0m1.426s 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 ************************************ 00:17:13.449 END TEST nvmf_control_msg_list 00:17:13.449 ************************************ 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.449 ************************************ 00:17:13.449 START TEST nvmf_wait_for_buf 00:17:13.449 ************************************ 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:13.449 * Looking for test storage... 00:17:13.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:13.449 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.711 --rc genhtml_branch_coverage=1 00:17:13.711 --rc genhtml_function_coverage=1 00:17:13.711 --rc genhtml_legend=1 00:17:13.711 --rc geninfo_all_blocks=1 00:17:13.711 --rc geninfo_unexecuted_blocks=1 00:17:13.711 00:17:13.711 ' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.711 --rc genhtml_branch_coverage=1 00:17:13.711 --rc genhtml_function_coverage=1 00:17:13.711 --rc genhtml_legend=1 00:17:13.711 --rc geninfo_all_blocks=1 00:17:13.711 --rc geninfo_unexecuted_blocks=1 00:17:13.711 00:17:13.711 ' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.711 --rc genhtml_branch_coverage=1 00:17:13.711 --rc genhtml_function_coverage=1 00:17:13.711 --rc genhtml_legend=1 00:17:13.711 --rc geninfo_all_blocks=1 00:17:13.711 --rc geninfo_unexecuted_blocks=1 00:17:13.711 00:17:13.711 ' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:13.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.711 --rc genhtml_branch_coverage=1 00:17:13.711 --rc genhtml_function_coverage=1 00:17:13.711 --rc genhtml_legend=1 00:17:13.711 --rc geninfo_all_blocks=1 00:17:13.711 --rc geninfo_unexecuted_blocks=1 00:17:13.711 00:17:13.711 ' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.711 08:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.711 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:13.712 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:13.712 Cannot find device "nvmf_init_br" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:13.712 Cannot find device "nvmf_init_br2" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:13.712 Cannot find device "nvmf_tgt_br" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.712 Cannot find device "nvmf_tgt_br2" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:13.712 Cannot find device "nvmf_init_br" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:13.712 Cannot find device "nvmf_init_br2" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:13.712 Cannot find device "nvmf_tgt_br" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:13.712 Cannot find device "nvmf_tgt_br2" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:13.712 Cannot find device "nvmf_br" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:13.712 Cannot find device "nvmf_init_if" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:13.712 Cannot find device "nvmf_init_if2" 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.712 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:13.712 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:13.980 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:13.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:13.980 00:17:13.980 --- 10.0.0.3 ping statistics --- 00:17:13.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.981 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:13.981 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:13.981 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:13.981 00:17:13.981 --- 10.0.0.4 ping statistics --- 00:17:13.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.981 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:13.981 00:17:13.981 --- 10.0.0.1 ping statistics --- 00:17:13.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.981 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:13.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:13.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:17:13.981 00:17:13.981 --- 10.0.0.2 ping statistics --- 00:17:13.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.981 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73964 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73964 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73964 ']' 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.981 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.239 [2024-11-20 08:50:44.926873] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
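With the target now running inside the namespace (--wait-for-rpc holds off subsystem initialization until RPCs arrive), the wait_for_buf test traced below reduces to a short RPC sequence: shrink the iobuf small pool, create a TCP transport with a very small buffer budget, expose a malloc namespace, drive it with spdk_nvme_perf, and finally read the small-pool retry counter. Condensed from the rpc_cmd calls that follow (rpc_cmd is the suite's RPC helper; reading a non-zero retry count as "requests had to wait for a buffer" is an interpretation based on the test name and the final check, not something the log states explicitly):

  rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc_cmd framework_start_init
  rpc_cmd bdev_malloc_create -b Malloc0 32 512
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  retry_count=$(rpc_cmd iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')   # 4750 in this run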
00:17:14.239 [2024-11-20 08:50:44.926997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.239 [2024-11-20 08:50:45.079541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.498 [2024-11-20 08:50:45.164307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.498 [2024-11-20 08:50:45.164397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.498 [2024-11-20 08:50:45.164412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.498 [2024-11-20 08:50:45.164423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.498 [2024-11-20 08:50:45.164432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.498 [2024-11-20 08:50:45.165003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.432 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.432 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:15.432 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.432 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.432 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.432 08:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.432 [2024-11-20 08:50:46.098161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.432 Malloc0 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.432 [2024-11-20 08:50:46.180135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.432 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.433 [2024-11-20 08:50:46.208282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.433 08:50:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:15.691 [2024-11-20 08:50:46.425957] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:17.065 Initializing NVMe Controllers 00:17:17.065 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:17.065 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:17.065 Initialization complete. Launching workers. 00:17:17.065 ======================================================== 00:17:17.065 Latency(us) 00:17:17.065 Device Information : IOPS MiB/s Average min max 00:17:17.065 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.99 62.50 7999.75 7916.86 8141.64 00:17:17.065 ======================================================== 00:17:17.065 Total : 499.99 62.50 7999.75 7916.86 8141.64 00:17:17.065 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:17.065 rmmod nvme_tcp 00:17:17.065 rmmod nvme_fabrics 00:17:17.065 rmmod nvme_keyring 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73964 ']' 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73964 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73964 ']' 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 73964 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73964 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.065 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.066 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73964' 00:17:17.066 killing process with pid 73964 00:17:17.066 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73964 00:17:17.066 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73964 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:17.324 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:17.584 00:17:17.584 real 0m4.169s 00:17:17.584 user 0m3.751s 00:17:17.584 sys 0m0.894s 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.584 ************************************ 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:17.584 END TEST nvmf_wait_for_buf 00:17:17.584 ************************************ 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.584 ************************************ 00:17:17.584 START TEST nvmf_nsid 00:17:17.584 ************************************ 00:17:17.584 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:17.844 * Looking for test storage... 
00:17:17.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:17.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.844 --rc genhtml_branch_coverage=1 00:17:17.844 --rc genhtml_function_coverage=1 00:17:17.844 --rc genhtml_legend=1 00:17:17.844 --rc geninfo_all_blocks=1 00:17:17.844 --rc geninfo_unexecuted_blocks=1 00:17:17.844 00:17:17.844 ' 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:17.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.844 --rc genhtml_branch_coverage=1 00:17:17.844 --rc genhtml_function_coverage=1 00:17:17.844 --rc genhtml_legend=1 00:17:17.844 --rc geninfo_all_blocks=1 00:17:17.844 --rc geninfo_unexecuted_blocks=1 00:17:17.844 00:17:17.844 ' 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:17.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.844 --rc genhtml_branch_coverage=1 00:17:17.844 --rc genhtml_function_coverage=1 00:17:17.844 --rc genhtml_legend=1 00:17:17.844 --rc geninfo_all_blocks=1 00:17:17.844 --rc geninfo_unexecuted_blocks=1 00:17:17.844 00:17:17.844 ' 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:17.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.844 --rc genhtml_branch_coverage=1 00:17:17.844 --rc genhtml_function_coverage=1 00:17:17.844 --rc genhtml_legend=1 00:17:17.844 --rc geninfo_all_blocks=1 00:17:17.844 --rc geninfo_unexecuted_blocks=1 00:17:17.844 00:17:17.844 ' 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
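The lcov probe above goes through cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field, so 'lt 1.15 2' returns success, after which the 1.x-style --rc coverage flags are exported in LCOV_OPTS. A rough functional equivalent, assuming GNU coreutils and deliberately simpler than the real helper:

  lt() {
      # true when $1 sorts strictly before $2 in version order
      [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lt 1.15 2 && echo 'lcov 1.15 predates 2.x'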
00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.844 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.845 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:17.845 Cannot find device "nvmf_init_br" 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:17.845 Cannot find device "nvmf_init_br2" 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:17.845 Cannot find device "nvmf_tgt_br" 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.845 Cannot find device "nvmf_tgt_br2" 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:17.845 Cannot find device "nvmf_init_br" 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:17.845 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:18.104 Cannot find device "nvmf_init_br2" 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:18.104 Cannot find device "nvmf_tgt_br" 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:18.104 Cannot find device "nvmf_tgt_br2" 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:18.104 Cannot find device "nvmf_br" 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:18.104 Cannot find device "nvmf_init_if" 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:18.104 Cannot find device "nvmf_init_if2" 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:18.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.104 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.105 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:18.105 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:18.105 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.105 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
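The "Cannot find device" and "Cannot open network namespace" messages earlier in this run are just nvmf_veth_init clearing leftovers before rebuilding the fixture; the topology it then creates is the same one the wait_for_buf run used. Condensed from the commands in the trace: the initiator-side veth ends (10.0.0.1 and 10.0.0.2) stay in the host, the target-side ends (10.0.0.3 and 10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, and the peer interfaces are stitched together by the nvmf_br bridge. The 'up' commands and the second interface pair are abbreviated in this sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end,    10.0.0.3/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  # (nvmf_init_if2/10.0.0.2 and nvmf_tgt_if2/10.0.0.4 are wired the same way)
  # The NVMe/TCP port is then opened with rules tagged SPDK_NVMF so that teardown can
  # drop exactly these rules via: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The pings to 10.0.0.3, 10.0.0.4, 10.0.0.1 and 10.0.0.2 that follow are the sanity check that the bridge actually forwards traffic between the host and the namespace.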
00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:18.363 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.363 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:17:18.363 00:17:18.363 --- 10.0.0.3 ping statistics --- 00:17:18.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.363 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:18.363 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:18.363 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:18.363 00:17:18.363 --- 10.0.0.4 ping statistics --- 00:17:18.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.363 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:18.363 00:17:18.363 --- 10.0.0.1 ping statistics --- 00:17:18.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.363 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:18.363 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:18.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:18.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:17:18.363 00:17:18.363 --- 10.0.0.2 ping statistics --- 00:17:18.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.364 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74229 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74229 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74229 ']' 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.364 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:18.364 [2024-11-20 08:50:49.200039] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
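The nvmfappstart -m 1 call traced above launches this test's first target inside the namespace and blocks until its RPC socket answers. Stripped of xtrace noise it reduces to roughly the following, with waitforlisten's polling loop elided and the /var/tmp/spdk.sock path taken from the "Waiting for process..." message:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!                  # 74229 in this run
  waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock accepts RPCs

The nsid test proper then adds a second, plain spdk_tgt on its own RPC socket (/var/tmp/tgt2.sock, core mask 0x2), which is what the tgt2sock and tgt2pid variables traced below refer to.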
00:17:18.364 [2024-11-20 08:50:49.200179] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.622 [2024-11-20 08:50:49.355375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.622 [2024-11-20 08:50:49.439506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.622 [2024-11-20 08:50:49.439588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.622 [2024-11-20 08:50:49.439609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.622 [2024-11-20 08:50:49.439620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.622 [2024-11-20 08:50:49.439629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.622 [2024-11-20 08:50:49.440156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.622 [2024-11-20 08:50:49.516306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74261 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3e19af20-5000-4a35-9ff2-ce90f2234eed 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=2b63ae27-fa31-4830-be2e-fe3e475ad2ec 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=53b6a654-f0d4-4939-b33a-5dcb73e8ff51 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:19.564 null0 00:17:19.564 null1 00:17:19.564 null2 00:17:19.564 [2024-11-20 08:50:50.327560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.564 [2024-11-20 08:50:50.329517] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:19.564 [2024-11-20 08:50:50.329591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74261 ] 00:17:19.564 [2024-11-20 08:50:50.351700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74261 /var/tmp/tgt2.sock 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74261 ']' 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
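Note: the rpc_cmd call traced at target/nsid.sh:63, like the rpc.py -s /var/tmp/tgt2.sock invocation that follows, receives its RPC commands on stdin, so only its output (null0, null1, null2) appears in the log. Below is a minimal, hypothetical sketch of how a subsystem whose namespaces are pinned to the uuidgen values above could be configured over an SPDK RPC socket; the bdev name and exact command sequence are assumptions, not values recovered from nsid.sh (only the NQN, address, port, and UUIDs appear in the trace).

    # Hypothetical reconstruction of the namespace setup (bdev names are placeholders).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/tgt2.sock
    "$rpc" -s "$sock" nvmf_create_transport -t tcp
    "$rpc" -s "$sock" bdev_null_create null0 64 512        # 64 MB backing bdev, 512 B blocks
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    # Pin the namespace UUID so the NGUID check later in the test is deterministic;
    # repeated for null1/null2 with the other two generated UUIDs.
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u 3e19af20-5000-4a35-9ff2-ce90f2234eed
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421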
00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.564 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:19.564 [2024-11-20 08:50:50.477574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.823 [2024-11-20 08:50:50.566695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.823 [2024-11-20 08:50:50.668590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:20.081 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.081 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:20.081 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:20.648 [2024-11-20 08:50:51.344672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.648 [2024-11-20 08:50:51.360761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:20.648 nvme0n1 nvme0n2 00:17:20.648 nvme1n1 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:20.648 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.024 08:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3e19af20-5000-4a35-9ff2-ce90f2234eed 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3e19af2050004a359ff2ce90f2234eed 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3E19AF2050004A359FF2CE90F2234EED 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3E19AF2050004A359FF2CE90F2234EED == \3\E\1\9\A\F\2\0\5\0\0\0\4\A\3\5\9\F\F\2\C\E\9\0\F\2\2\3\4\E\E\D ]] 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 2b63ae27-fa31-4830-be2e-fe3e475ad2ec 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2b63ae27fa314830be2efe3e475ad2ec 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2B63AE27FA314830BE2EFE3E475AD2EC 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 2B63AE27FA314830BE2EFE3E475AD2EC == \2\B\6\3\A\E\2\7\F\A\3\1\4\8\3\0\B\E\2\E\F\E\3\E\4\7\5\A\D\2\E\C ]] 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.024 08:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:22.024 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 53b6a654-f0d4-4939-b33a-5dcb73e8ff51 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=53b6a654f0d44939b33a5dcb73e8ff51 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 53B6A654F0D44939B33A5DCB73E8FF51 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 53B6A654F0D44939B33A5DCB73E8FF51 == \5\3\B\6\A\6\5\4\F\0\D\4\4\9\3\9\B\3\3\A\5\D\C\B\7\3\E\8\F\F\5\1 ]] 00:17:22.025 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:22.284 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:22.284 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:22.284 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74261 00:17:22.284 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74261 ']' 00:17:22.284 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74261 00:17:22.284 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:22.284 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.284 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74261 00:17:22.284 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:22.284 killing process with pid 74261 00:17:22.284 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:22.284 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74261' 00:17:22.284 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74261 00:17:22.284 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74261 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.852 rmmod nvme_tcp 00:17:22.852 rmmod nvme_fabrics 00:17:22.852 rmmod nvme_keyring 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74229 ']' 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74229 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74229 ']' 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74229 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74229 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.852 killing process with pid 74229 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74229' 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74229 00:17:22.852 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74229 00:17:23.127 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.127 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.127 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.127 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:23.127 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:23.127 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.128 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.128 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.128 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:23.128 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:23.128 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:23.128 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:23.128 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:23.128 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:23.128 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:23.387 00:17:23.387 real 0m5.717s 00:17:23.387 user 0m8.342s 00:17:23.387 sys 0m1.934s 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:23.387 ************************************ 00:17:23.387 END TEST nvmf_nsid 00:17:23.387 ************************************ 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:23.387 00:17:23.387 real 5m26.829s 00:17:23.387 user 11m27.190s 00:17:23.387 sys 1m12.234s 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.387 08:50:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.387 ************************************ 00:17:23.387 END TEST nvmf_target_extra 00:17:23.387 ************************************ 00:17:23.387 08:50:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:23.387 08:50:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.387 08:50:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.387 08:50:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:23.387 ************************************ 00:17:23.387 START TEST nvmf_host 00:17:23.387 ************************************ 00:17:23.387 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:23.648 * Looking for test storage... 
00:17:23.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:23.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.648 --rc genhtml_branch_coverage=1 00:17:23.648 --rc genhtml_function_coverage=1 00:17:23.648 --rc genhtml_legend=1 00:17:23.648 --rc geninfo_all_blocks=1 00:17:23.648 --rc geninfo_unexecuted_blocks=1 00:17:23.648 00:17:23.648 ' 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:23.648 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:23.648 --rc genhtml_branch_coverage=1 00:17:23.648 --rc genhtml_function_coverage=1 00:17:23.648 --rc genhtml_legend=1 00:17:23.648 --rc geninfo_all_blocks=1 00:17:23.648 --rc geninfo_unexecuted_blocks=1 00:17:23.648 00:17:23.648 ' 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:23.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.648 --rc genhtml_branch_coverage=1 00:17:23.648 --rc genhtml_function_coverage=1 00:17:23.648 --rc genhtml_legend=1 00:17:23.648 --rc geninfo_all_blocks=1 00:17:23.648 --rc geninfo_unexecuted_blocks=1 00:17:23.648 00:17:23.648 ' 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:23.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.648 --rc genhtml_branch_coverage=1 00:17:23.648 --rc genhtml_function_coverage=1 00:17:23.648 --rc genhtml_legend=1 00:17:23.648 --rc geninfo_all_blocks=1 00:17:23.648 --rc geninfo_unexecuted_blocks=1 00:17:23.648 00:17:23.648 ' 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.648 08:50:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.649 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:23.649 
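Note: the repeated message '/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected' is produced by the traced check '[' '' -eq 1 ']': a flag variable expands to the empty string, which test cannot parse as an integer, so the check errors out, is treated as false, and the run continues. A minimal defensive rewrite would default the expansion first; the variable name and branch body below are placeholders, since neither is visible in this trace.

    # Hypothetical hardening of the check at nvmf/common.sh:33 (FLAG_VAR is a placeholder name).
    if [[ "${FLAG_VAR:-0}" -eq 1 ]]; then
        :   # branch body not shown in the trace
    fi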
08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.649 ************************************ 00:17:23.649 START TEST nvmf_identify 00:17:23.649 ************************************ 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:23.649 * Looking for test storage... 00:17:23.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:23.649 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.915 --rc genhtml_branch_coverage=1 00:17:23.915 --rc genhtml_function_coverage=1 00:17:23.915 --rc genhtml_legend=1 00:17:23.915 --rc geninfo_all_blocks=1 00:17:23.915 --rc geninfo_unexecuted_blocks=1 00:17:23.915 00:17:23.915 ' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.915 --rc genhtml_branch_coverage=1 00:17:23.915 --rc genhtml_function_coverage=1 00:17:23.915 --rc genhtml_legend=1 00:17:23.915 --rc geninfo_all_blocks=1 00:17:23.915 --rc geninfo_unexecuted_blocks=1 00:17:23.915 00:17:23.915 ' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.915 --rc genhtml_branch_coverage=1 00:17:23.915 --rc genhtml_function_coverage=1 00:17:23.915 --rc genhtml_legend=1 00:17:23.915 --rc geninfo_all_blocks=1 00:17:23.915 --rc geninfo_unexecuted_blocks=1 00:17:23.915 00:17:23.915 ' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.915 --rc genhtml_branch_coverage=1 00:17:23.915 --rc genhtml_function_coverage=1 00:17:23.915 --rc genhtml_legend=1 00:17:23.915 --rc geninfo_all_blocks=1 00:17:23.915 --rc geninfo_unexecuted_blocks=1 00:17:23.915 00:17:23.915 ' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.915 
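Note: as traced above, nvmf/common.sh builds the initiator identity once: 'nvme gen-hostnqn' yields an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, the UUID suffix is reused as the host ID, and both are packed into the NVME_HOST array that later 'nvme connect' calls splice in. A condensed sketch of that pattern follows; the suffix-stripping expansion is an assumption, since common.sh's exact derivation of NVME_HOSTID is not echoed in the trace.

    # Sketch of the host identity setup seen above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:0afca7d9-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}        # assumed derivation: keep only the UUID suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Consumed later, e.g.: nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s 4420 "${NVME_HOST[@]}" -n <subsystem NQN>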
08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.915 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.915 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.916 08:50:54 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:23.916 Cannot find device "nvmf_init_br" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:23.916 Cannot find device "nvmf_init_br2" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:23.916 Cannot find device "nvmf_tgt_br" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:23.916 Cannot find device "nvmf_tgt_br2" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:23.916 Cannot find device "nvmf_init_br" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:23.916 Cannot find device "nvmf_init_br2" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:23.916 Cannot find device "nvmf_tgt_br" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:23.916 Cannot find device "nvmf_tgt_br2" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:23.916 Cannot find device "nvmf_br" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:23.916 Cannot find device "nvmf_init_if" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:23.916 Cannot find device "nvmf_init_if2" 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:23.916 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:24.175 
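Note: the 'Cannot find device ...' and 'Cannot open network namespace ...' messages above are expected; nvmf_veth_init first tries to delete leftovers from a previous run, and those deletions fail harmlessly on a clean host. The setup that follows builds the test topology: a network namespace nvmf_tgt_ns_spdk for the target, veth pairs nvmf_init_if/nvmf_init_if2 on the initiator side and nvmf_tgt_if/nvmf_tgt_if2 moved into the namespace, addressed 10.0.0.1 through 10.0.0.4/24, then everything is brought up, enslaved to the nvmf_br bridge, and allowed through the SPDK_NVMF iptables rules. Condensed from the traced commands (error handling omitted; the bridge and firewall steps follow in the trace below):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2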
08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:24.175 08:50:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:24.175 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:24.175 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:24.175 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:24.175 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:24.175 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:24.176 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:24.176 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:24.176 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:24.176 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.176 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:24.176 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:24.176 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:24.176 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:17:24.176 00:17:24.176 --- 10.0.0.3 ping statistics --- 00:17:24.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.176 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:24.176 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:24.176 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:24.176 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:17:24.176 00:17:24.176 --- 10.0.0.4 ping statistics --- 00:17:24.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.176 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:24.176 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:24.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:24.176 00:17:24.176 --- 10.0.0.1 ping statistics --- 00:17:24.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.176 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:24.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:24.435 00:17:24.435 --- 10.0.0.2 ping statistics --- 00:17:24.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.435 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74624 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74624 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74624 ']' 00:17:24.435 
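The block above is the per-test network bring-up from nvmf/common.sh: veth pairs for the initiator side (nvmf_init_if/nvmf_init_br, nvmf_init_if2/nvmf_init_br2) and the target side (nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2), with the target-side *_if ends moved into the nvmf_tgt_ns_spdk namespace, all *_br ends enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and a ping in each direction to confirm 10.0.0.1-10.0.0.4 are reachable. A condensed sketch of the same topology for one initiator/target pair, using only the iproute2 and iptables commands that appear in the trace (the second pair is set up identically with the *2 names and 10.0.0.2/10.0.0.4):

  # target gets its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side; the *_br ends stay in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target 10.0.0.3, same /24
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the root-namespace ends so the two addresses can talk
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP traffic in, and traffic across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity checks, as in the trace
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Keeping the target in its own namespace means the SPDK listener on 10.0.0.3:4420 is reached over a real veth/bridge path rather than loopback, which is what the later identify runs exercise.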
08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.435 08:50:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:24.435 [2024-11-20 08:50:55.193626] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:24.436 [2024-11-20 08:50:55.193722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.694 [2024-11-20 08:50:55.350301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.694 [2024-11-20 08:50:55.450030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.694 [2024-11-20 08:50:55.450096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.694 [2024-11-20 08:50:55.450112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.694 [2024-11-20 08:50:55.450132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.694 [2024-11-20 08:50:55.450143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
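At this point identify.sh has launched the target inside the namespace and is blocked in waitforlisten until PID 74624 answers on the default RPC socket. A minimal stand-alone sketch of the same launch-and-wait pattern (the polling loop here is illustrative only; the waitforlisten helper in autotest_common.sh performs the equivalent wait with more robust checks):

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # path as used in this run
  # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: 4-core mask
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # do not issue RPCs until the app has created its UNIX-domain RPC socket
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The EAL/app notices that follow (total cores available, tracepoint group mask, the /dev/shm/nvmf_trace.0 hint, and the four reactor start lines) are the target confirming exactly those -i/-e/-m values.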
00:17:24.694 [2024-11-20 08:50:55.451663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.694 [2024-11-20 08:50:55.451763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.694 [2024-11-20 08:50:55.451839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.694 [2024-11-20 08:50:55.451845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.694 [2024-11-20 08:50:55.531307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.630 [2024-11-20 08:50:56.239609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.630 Malloc0 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.630 [2024-11-20 08:50:56.358529] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.630 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.630 [ 00:17:25.630 { 00:17:25.630 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.630 "subtype": "Discovery", 00:17:25.630 "listen_addresses": [ 00:17:25.630 { 00:17:25.630 "trtype": "TCP", 00:17:25.630 "adrfam": "IPv4", 00:17:25.630 "traddr": "10.0.0.3", 00:17:25.630 "trsvcid": "4420" 00:17:25.630 } 00:17:25.630 ], 00:17:25.630 "allow_any_host": true, 00:17:25.630 "hosts": [] 00:17:25.630 }, 00:17:25.630 { 00:17:25.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.630 "subtype": "NVMe", 00:17:25.630 "listen_addresses": [ 00:17:25.630 { 00:17:25.630 "trtype": "TCP", 00:17:25.630 "adrfam": "IPv4", 00:17:25.630 "traddr": "10.0.0.3", 00:17:25.630 "trsvcid": "4420" 00:17:25.630 } 00:17:25.630 ], 00:17:25.631 "allow_any_host": true, 00:17:25.631 "hosts": [], 00:17:25.631 "serial_number": "SPDK00000000000001", 00:17:25.631 "model_number": "SPDK bdev Controller", 00:17:25.631 "max_namespaces": 32, 00:17:25.631 "min_cntlid": 1, 00:17:25.631 "max_cntlid": 65519, 00:17:25.631 "namespaces": [ 00:17:25.631 { 00:17:25.631 "nsid": 1, 00:17:25.631 "bdev_name": "Malloc0", 00:17:25.631 "name": "Malloc0", 00:17:25.631 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:25.631 "eui64": "ABCDEF0123456789", 00:17:25.631 "uuid": "f6f59be0-dfe1-4c31-9f82-771fc2a7192e" 00:17:25.631 } 00:17:25.631 ] 00:17:25.631 } 00:17:25.631 ] 00:17:25.631 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.631 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:25.631 [2024-11-20 08:50:56.412163] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
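The subsystem queried by the identify passes is provisioned entirely over JSON-RPC; rpc_cmd in the trace is the test wrapper around scripts/rpc.py pointed at the target's /var/tmp/spdk.sock. Written out directly, with the argument values exactly as they appear above (the rpc.py path relative to the SPDK repo is assumed), the sequence is:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192   # flags taken verbatim from NVMF_TRANSPORT_OPTS in the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_get_subsystems                       # returns the JSON array shown above

The nvmf_get_subsystems output above is the ground truth the identify runs are checked against: one discovery subsystem and one NVM subsystem (cnode1) with a single Malloc0 namespace, both listening on 10.0.0.3:4420.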
00:17:25.631 [2024-11-20 08:50:56.412374] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74663 ] 00:17:25.893 [2024-11-20 08:50:56.575000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:25.893 [2024-11-20 08:50:56.575082] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:25.893 [2024-11-20 08:50:56.575092] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:25.893 [2024-11-20 08:50:56.575108] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:25.893 [2024-11-20 08:50:56.575124] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:25.893 [2024-11-20 08:50:56.575496] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:25.893 [2024-11-20 08:50:56.575589] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19c5750 0 00:17:25.893 [2024-11-20 08:50:56.589834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:25.893 [2024-11-20 08:50:56.589867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:25.893 [2024-11-20 08:50:56.589875] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:25.893 [2024-11-20 08:50:56.589880] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:25.893 [2024-11-20 08:50:56.589920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.589930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.589935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.893 [2024-11-20 08:50:56.589953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:25.893 [2024-11-20 08:50:56.589993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.893 [2024-11-20 08:50:56.597828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.893 [2024-11-20 08:50:56.597857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.893 [2024-11-20 08:50:56.597865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.597871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.893 [2024-11-20 08:50:56.597889] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:25.893 [2024-11-20 08:50:56.597901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:25.893 [2024-11-20 08:50:56.597908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:25.893 [2024-11-20 08:50:56.597930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.597937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:25.893 [2024-11-20 08:50:56.597941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.893 [2024-11-20 08:50:56.597952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.893 [2024-11-20 08:50:56.597986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.893 [2024-11-20 08:50:56.598053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.893 [2024-11-20 08:50:56.598063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.893 [2024-11-20 08:50:56.598068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.893 [2024-11-20 08:50:56.598081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:25.893 [2024-11-20 08:50:56.598090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:25.893 [2024-11-20 08:50:56.598101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.893 [2024-11-20 08:50:56.598120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.893 [2024-11-20 08:50:56.598145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.893 [2024-11-20 08:50:56.598202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.893 [2024-11-20 08:50:56.598211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.893 [2024-11-20 08:50:56.598219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.893 [2024-11-20 08:50:56.598231] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:25.893 [2024-11-20 08:50:56.598241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:25.893 [2024-11-20 08:50:56.598251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.893 [2024-11-20 08:50:56.598270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.893 [2024-11-20 08:50:56.598293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.893 [2024-11-20 08:50:56.598334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.893 [2024-11-20 08:50:56.598343] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.893 [2024-11-20 08:50:56.598347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.893 [2024-11-20 08:50:56.598360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:25.893 [2024-11-20 08:50:56.598373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.893 [2024-11-20 08:50:56.598392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.893 [2024-11-20 08:50:56.598415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.893 [2024-11-20 08:50:56.598463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.893 [2024-11-20 08:50:56.598478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.893 [2024-11-20 08:50:56.598483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.893 [2024-11-20 08:50:56.598495] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:25.893 [2024-11-20 08:50:56.598501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:25.893 [2024-11-20 08:50:56.598512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:25.893 [2024-11-20 08:50:56.598627] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:25.893 [2024-11-20 08:50:56.598635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:25.893 [2024-11-20 08:50:56.598648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.893 [2024-11-20 08:50:56.598658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.893 [2024-11-20 08:50:56.598667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.893 [2024-11-20 08:50:56.598694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.893 [2024-11-20 08:50:56.598737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.893 [2024-11-20 08:50:56.598747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.893 [2024-11-20 08:50:56.598752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:25.893 [2024-11-20 08:50:56.598757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.894 [2024-11-20 08:50:56.598763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:25.894 [2024-11-20 08:50:56.598776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.598782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.598787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.894 [2024-11-20 08:50:56.598796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.894 [2024-11-20 08:50:56.598841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.894 [2024-11-20 08:50:56.598887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.894 [2024-11-20 08:50:56.598897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.894 [2024-11-20 08:50:56.598901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.598928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.894 [2024-11-20 08:50:56.598935] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:25.894 [2024-11-20 08:50:56.598941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:25.894 [2024-11-20 08:50:56.598952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:25.894 [2024-11-20 08:50:56.598973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:25.894 [2024-11-20 08:50:56.598988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.598994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.894 [2024-11-20 08:50:56.599003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.894 [2024-11-20 08:50:56.599032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.894 [2024-11-20 08:50:56.599128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:25.894 [2024-11-20 08:50:56.599137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:25.894 [2024-11-20 08:50:56.599142] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599147] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c5750): datao=0, datal=4096, cccid=0 00:17:25.894 [2024-11-20 08:50:56.599153] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a29740) on tqpair(0x19c5750): expected_datao=0, payload_size=4096 00:17:25.894 [2024-11-20 08:50:56.599159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599170] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599176] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.894 [2024-11-20 08:50:56.599195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.894 [2024-11-20 08:50:56.599199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.894 [2024-11-20 08:50:56.599215] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:25.894 [2024-11-20 08:50:56.599222] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:25.894 [2024-11-20 08:50:56.599227] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:25.894 [2024-11-20 08:50:56.599234] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:25.894 [2024-11-20 08:50:56.599239] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:25.894 [2024-11-20 08:50:56.599246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:25.894 [2024-11-20 08:50:56.599264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:25.894 [2024-11-20 08:50:56.599276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.894 [2024-11-20 08:50:56.599295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:25.894 [2024-11-20 08:50:56.599320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.894 [2024-11-20 08:50:56.599374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.894 [2024-11-20 08:50:56.599383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.894 [2024-11-20 08:50:56.599388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.894 [2024-11-20 08:50:56.599403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c5750) 00:17:25.894 [2024-11-20 08:50:56.599421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.894 
[2024-11-20 08:50:56.599429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19c5750) 00:17:25.894 [2024-11-20 08:50:56.599457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.894 [2024-11-20 08:50:56.599471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19c5750) 00:17:25.894 [2024-11-20 08:50:56.599490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.894 [2024-11-20 08:50:56.599497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c5750) 00:17:25.894 [2024-11-20 08:50:56.599513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.894 [2024-11-20 08:50:56.599520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:25.894 [2024-11-20 08:50:56.599540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:25.894 [2024-11-20 08:50:56.599551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.894 [2024-11-20 08:50:56.599555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c5750) 00:17:25.894 [2024-11-20 08:50:56.599564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.894 [2024-11-20 08:50:56.599592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29740, cid 0, qid 0 00:17:25.894 [2024-11-20 08:50:56.599602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a298c0, cid 1, qid 0 00:17:25.894 [2024-11-20 08:50:56.599608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29a40, cid 2, qid 0 00:17:25.894 [2024-11-20 08:50:56.599613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.894 [2024-11-20 08:50:56.599619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29d40, cid 4, qid 0 00:17:25.894 [2024-11-20 08:50:56.599699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.895 [2024-11-20 08:50:56.599708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.895 [2024-11-20 08:50:56.599712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.599717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29d40) on tqpair=0x19c5750 00:17:25.895 [2024-11-20 
08:50:56.599725] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:25.895 [2024-11-20 08:50:56.599731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:25.895 [2024-11-20 08:50:56.599745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.599752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c5750) 00:17:25.895 [2024-11-20 08:50:56.599761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.895 [2024-11-20 08:50:56.599785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29d40, cid 4, qid 0 00:17:25.895 [2024-11-20 08:50:56.599873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:25.895 [2024-11-20 08:50:56.599884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:25.895 [2024-11-20 08:50:56.599889] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.599894] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c5750): datao=0, datal=4096, cccid=4 00:17:25.895 [2024-11-20 08:50:56.599899] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a29d40) on tqpair(0x19c5750): expected_datao=0, payload_size=4096 00:17:25.895 [2024-11-20 08:50:56.599905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.599913] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.599919] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.599929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.895 [2024-11-20 08:50:56.599937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.895 [2024-11-20 08:50:56.599941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.599946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29d40) on tqpair=0x19c5750 00:17:25.895 [2024-11-20 08:50:56.599964] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:25.895 [2024-11-20 08:50:56.600007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c5750) 00:17:25.895 [2024-11-20 08:50:56.600024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.895 [2024-11-20 08:50:56.600033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19c5750) 00:17:25.895 [2024-11-20 08:50:56.600051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.895 [2024-11-20 08:50:56.600084] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29d40, cid 4, qid 0 00:17:25.895 [2024-11-20 08:50:56.600095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29ec0, cid 5, qid 0 00:17:25.895 [2024-11-20 08:50:56.600202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:25.895 [2024-11-20 08:50:56.600222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:25.895 [2024-11-20 08:50:56.600229] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600234] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c5750): datao=0, datal=1024, cccid=4 00:17:25.895 [2024-11-20 08:50:56.600240] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a29d40) on tqpair(0x19c5750): expected_datao=0, payload_size=1024 00:17:25.895 [2024-11-20 08:50:56.600245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600254] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600259] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.895 [2024-11-20 08:50:56.600273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.895 [2024-11-20 08:50:56.600277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29ec0) on tqpair=0x19c5750 00:17:25.895 [2024-11-20 08:50:56.600307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.895 [2024-11-20 08:50:56.600317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.895 [2024-11-20 08:50:56.600322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29d40) on tqpair=0x19c5750 00:17:25.895 [2024-11-20 08:50:56.600342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c5750) 00:17:25.895 [2024-11-20 08:50:56.600357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.895 [2024-11-20 08:50:56.600387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29d40, cid 4, qid 0 00:17:25.895 [2024-11-20 08:50:56.600465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:25.895 [2024-11-20 08:50:56.600479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:25.895 [2024-11-20 08:50:56.600484] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600489] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c5750): datao=0, datal=3072, cccid=4 00:17:25.895 [2024-11-20 08:50:56.600494] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a29d40) on tqpair(0x19c5750): expected_datao=0, payload_size=3072 00:17:25.895 [2024-11-20 08:50:56.600499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600507] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:25.895 [2024-11-20 08:50:56.600512] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.895 [2024-11-20 08:50:56.600547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.895 [2024-11-20 08:50:56.600551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29d40) on tqpair=0x19c5750 00:17:25.895 [2024-11-20 08:50:56.600570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.895 [2024-11-20 08:50:56.600576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c5750) 00:17:25.895 [2024-11-20 08:50:56.600585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.895 [2024-11-20 08:50:56.600619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29d40, cid 4, qid 0 00:17:25.895 [2024-11-20 08:50:56.600682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:25.895 [2024-11-20 08:50:56.600691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:25.895 ===================================================== 00:17:25.895 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:25.895 ===================================================== 00:17:25.895 Controller Capabilities/Features 00:17:25.895 ================================ 00:17:25.895 Vendor ID: 0000 00:17:25.895 Subsystem Vendor ID: 0000 00:17:25.895 Serial Number: .................... 00:17:25.895 Model Number: ........................................ 
00:17:25.895 Firmware Version: 25.01 00:17:25.895 Recommended Arb Burst: 0 00:17:25.895 IEEE OUI Identifier: 00 00 00 00:17:25.895 Multi-path I/O 00:17:25.895 May have multiple subsystem ports: No 00:17:25.895 May have multiple controllers: No 00:17:25.895 Associated with SR-IOV VF: No 00:17:25.895 Max Data Transfer Size: 131072 00:17:25.896 Max Number of Namespaces: 0 00:17:25.896 Max Number of I/O Queues: 1024 00:17:25.896 NVMe Specification Version (VS): 1.3 00:17:25.896 NVMe Specification Version (Identify): 1.3 00:17:25.896 Maximum Queue Entries: 128 00:17:25.896 Contiguous Queues Required: Yes 00:17:25.896 Arbitration Mechanisms Supported 00:17:25.896 Weighted Round Robin: Not Supported 00:17:25.896 Vendor Specific: Not Supported 00:17:25.896 Reset Timeout: 15000 ms 00:17:25.896 Doorbell Stride: 4 bytes 00:17:25.896 NVM Subsystem Reset: Not Supported 00:17:25.896 Command Sets Supported 00:17:25.896 NVM Command Set: Supported 00:17:25.896 Boot Partition: Not Supported 00:17:25.896 Memory Page Size Minimum: 4096 bytes 00:17:25.896 Memory Page Size Maximum: 4096 bytes 00:17:25.896 Persistent Memory Region: Not Supported 00:17:25.896 Optional Asynchronous Events Supported 00:17:25.896 Namespace Attribute Notices: Not Supported 00:17:25.896 Firmware Activation Notices: Not Supported 00:17:25.896 ANA Change Notices: Not Supported 00:17:25.896 PLE Aggregate Log Change Notices: Not Supported 00:17:25.896 LBA Status Info Alert Notices: Not Supported 00:17:25.896 EGE Aggregate Log Change Notices: Not Supported 00:17:25.896 Normal NVM Subsystem Shutdown event: Not Supported 00:17:25.896 Zone Descriptor Change Notices: Not Supported 00:17:25.896 Discovery Log Change Notices: Supported 00:17:25.896 Controller Attributes 00:17:25.896 128-bit Host Identifier: Not Supported 00:17:25.896 Non-Operational Permissive Mode: Not Supported 00:17:25.896 NVM Sets: Not Supported 00:17:25.896 Read Recovery Levels: Not Supported 00:17:25.896 Endurance Groups: Not Supported 00:17:25.896 Predictable Latency Mode: Not Supported 00:17:25.896 Traffic Based Keep ALive: Not Supported 00:17:25.896 Namespace Granularity: Not Supported 00:17:25.896 SQ Associations: Not Supported 00:17:25.896 UUID List: Not Supported 00:17:25.896 Multi-Domain Subsystem: Not Supported 00:17:25.896 Fixed Capacity Management: Not Supported 00:17:25.896 Variable Capacity Management: Not Supported 00:17:25.896 Delete Endurance Group: Not Supported 00:17:25.896 Delete NVM Set: Not Supported 00:17:25.896 Extended LBA Formats Supported: Not Supported 00:17:25.896 Flexible Data Placement Supported: Not Supported 00:17:25.896 00:17:25.896 Controller Memory Buffer Support 00:17:25.896 ================================ 00:17:25.896 Supported: No 00:17:25.896 00:17:25.896 Persistent Memory Region Support 00:17:25.896 ================================ 00:17:25.896 Supported: No 00:17:25.896 00:17:25.896 Admin Command Set Attributes 00:17:25.896 ============================ 00:17:25.896 Security Send/Receive: Not Supported 00:17:25.896 Format NVM: Not Supported 00:17:25.896 Firmware Activate/Download: Not Supported 00:17:25.896 Namespace Management: Not Supported 00:17:25.896 Device Self-Test: Not Supported 00:17:25.896 Directives: Not Supported 00:17:25.896 NVMe-MI: Not Supported 00:17:25.896 Virtualization Management: Not Supported 00:17:25.896 Doorbell Buffer Config: Not Supported 00:17:25.896 Get LBA Status Capability: Not Supported 00:17:25.896 Command & Feature Lockdown Capability: Not Supported 00:17:25.896 Abort Command Limit: 1 00:17:25.896 Async 
Event Request Limit: 4 00:17:25.896 Number of Firmware Slots: N/A 00:17:25.896 Firmware Slot 1 Read-Only: N/A 00:17:25.896 [2024-11-20 08:50:56.600696] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:25.896 [2024-11-20 08:50:56.600700] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c5750): datao=0, datal=8, cccid=4 00:17:25.896 [2024-11-20 08:50:56.600706] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a29d40) on tqpair(0x19c5750): expected_datao=0, payload_size=8 00:17:25.896 [2024-11-20 08:50:56.600711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.896 [2024-11-20 08:50:56.600719] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:25.896 [2024-11-20 08:50:56.600724] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:25.896 [2024-11-20 08:50:56.600743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.896 [2024-11-20 08:50:56.600753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.896 [2024-11-20 08:50:56.600757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.896 [2024-11-20 08:50:56.600762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29d40) on tqpair=0x19c5750 00:17:25.896 Firmware Activation Without Reset: N/A 00:17:25.896 Multiple Update Detection Support: N/A 00:17:25.896 Firmware Update Granularity: No Information Provided 00:17:25.896 Per-Namespace SMART Log: No 00:17:25.896 Asymmetric Namespace Access Log Page: Not Supported 00:17:25.896 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:25.896 Command Effects Log Page: Not Supported 00:17:25.896 Get Log Page Extended Data: Supported 00:17:25.896 Telemetry Log Pages: Not Supported 00:17:25.896 Persistent Event Log Pages: Not Supported 00:17:25.896 Supported Log Pages Log Page: May Support 00:17:25.896 Commands Supported & Effects Log Page: Not Supported 00:17:25.896 Feature Identifiers & Effects Log Page:May Support 00:17:25.896 NVMe-MI Commands & Effects Log Page: May Support 00:17:25.896 Data Area 4 for Telemetry Log: Not Supported 00:17:25.896 Error Log Page Entries Supported: 128 00:17:25.896 Keep Alive: Not Supported 00:17:25.896 00:17:25.896 NVM Command Set Attributes 00:17:25.896 ========================== 00:17:25.896 Submission Queue Entry Size 00:17:25.896 Max: 1 00:17:25.896 Min: 1 00:17:25.896 Completion Queue Entry Size 00:17:25.896 Max: 1 00:17:25.896 Min: 1 00:17:25.896 Number of Namespaces: 0 00:17:25.896 Compare Command: Not Supported 00:17:25.896 Write Uncorrectable Command: Not Supported 00:17:25.896 Dataset Management Command: Not Supported 00:17:25.896 Write Zeroes Command: Not Supported 00:17:25.896 Set Features Save Field: Not Supported 00:17:25.896 Reservations: Not Supported 00:17:25.896 Timestamp: Not Supported 00:17:25.896 Copy: Not Supported 00:17:25.896 Volatile Write Cache: Not Present 00:17:25.896 Atomic Write Unit (Normal): 1 00:17:25.896 Atomic Write Unit (PFail): 1 00:17:25.896 Atomic Compare & Write Unit: 1 00:17:25.896 Fused Compare & Write: Supported 00:17:25.896 Scatter-Gather List 00:17:25.896 SGL Command Set: Supported 00:17:25.896 SGL Keyed: Supported 00:17:25.896 SGL Bit Bucket Descriptor: Not Supported 00:17:25.896 SGL Metadata Pointer: Not Supported 00:17:25.896 Oversized SGL: Not Supported 00:17:25.896 SGL Metadata Address: Not Supported 00:17:25.896 SGL Offset: Supported 00:17:25.896 Transport SGL Data Block: Not Supported 00:17:25.896 Replay
Protected Memory Block: Not Supported 00:17:25.896 00:17:25.896 Firmware Slot Information 00:17:25.896 ========================= 00:17:25.896 Active slot: 0 00:17:25.896 00:17:25.896 00:17:25.896 Error Log 00:17:25.896 ========= 00:17:25.896 00:17:25.896 Active Namespaces 00:17:25.897 ================= 00:17:25.897 Discovery Log Page 00:17:25.897 ================== 00:17:25.897 Generation Counter: 2 00:17:25.897 Number of Records: 2 00:17:25.897 Record Format: 0 00:17:25.897 00:17:25.897 Discovery Log Entry 0 00:17:25.897 ---------------------- 00:17:25.897 Transport Type: 3 (TCP) 00:17:25.897 Address Family: 1 (IPv4) 00:17:25.897 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:25.897 Entry Flags: 00:17:25.897 Duplicate Returned Information: 1 00:17:25.897 Explicit Persistent Connection Support for Discovery: 1 00:17:25.897 Transport Requirements: 00:17:25.897 Secure Channel: Not Required 00:17:25.897 Port ID: 0 (0x0000) 00:17:25.897 Controller ID: 65535 (0xffff) 00:17:25.897 Admin Max SQ Size: 128 00:17:25.897 Transport Service Identifier: 4420 00:17:25.897 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:25.897 Transport Address: 10.0.0.3 00:17:25.897 Discovery Log Entry 1 00:17:25.897 ---------------------- 00:17:25.897 Transport Type: 3 (TCP) 00:17:25.897 Address Family: 1 (IPv4) 00:17:25.897 Subsystem Type: 2 (NVM Subsystem) 00:17:25.897 Entry Flags: 00:17:25.897 Duplicate Returned Information: 0 00:17:25.897 Explicit Persistent Connection Support for Discovery: 0 00:17:25.897 Transport Requirements: 00:17:25.897 Secure Channel: Not Required 00:17:25.897 Port ID: 0 (0x0000) 00:17:25.897 Controller ID: 65535 (0xffff) 00:17:25.897 Admin Max SQ Size: 128 00:17:25.897 Transport Service Identifier: 4420 00:17:25.897 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:25.897 Transport Address: 10.0.0.3 [2024-11-20 08:50:56.600894] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:25.897 [2024-11-20 08:50:56.600913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29740) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.600923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.897 [2024-11-20 08:50:56.600930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a298c0) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.600935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.897 [2024-11-20 08:50:56.600941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29a40) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.600947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.897 [2024-11-20 08:50:56.600953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.600958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.897 [2024-11-20 08:50:56.600969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.600975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.600980] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c5750) 00:17:25.897 [2024-11-20 08:50:56.600994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.897 [2024-11-20 08:50:56.601023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.897 [2024-11-20 08:50:56.601074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.897 [2024-11-20 08:50:56.601083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.897 [2024-11-20 08:50:56.601088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.601104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c5750) 00:17:25.897 [2024-11-20 08:50:56.601123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.897 [2024-11-20 08:50:56.601150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.897 [2024-11-20 08:50:56.601216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.897 [2024-11-20 08:50:56.601224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.897 [2024-11-20 08:50:56.601229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.601240] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:25.897 [2024-11-20 08:50:56.601245] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:25.897 [2024-11-20 08:50:56.601258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c5750) 00:17:25.897 [2024-11-20 08:50:56.601277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.897 [2024-11-20 08:50:56.601299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.897 [2024-11-20 08:50:56.601343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.897 [2024-11-20 08:50:56.601352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.897 [2024-11-20 08:50:56.601357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.601375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.897 [2024-11-20 
08:50:56.601382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c5750) 00:17:25.897 [2024-11-20 08:50:56.601395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.897 [2024-11-20 08:50:56.601416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.897 [2024-11-20 08:50:56.601460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.897 [2024-11-20 08:50:56.601477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.897 [2024-11-20 08:50:56.601485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.601505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c5750) 00:17:25.897 [2024-11-20 08:50:56.601525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.897 [2024-11-20 08:50:56.601555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.897 [2024-11-20 08:50:56.601597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.897 [2024-11-20 08:50:56.601606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.897 [2024-11-20 08:50:56.601610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.897 [2024-11-20 08:50:56.601628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.897 [2024-11-20 08:50:56.601639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c5750) 00:17:25.898 [2024-11-20 08:50:56.601647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.898 [2024-11-20 08:50:56.601669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.898 [2024-11-20 08:50:56.601716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.898 [2024-11-20 08:50:56.601724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.898 [2024-11-20 08:50:56.601729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.898 [2024-11-20 08:50:56.601734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.898 [2024-11-20 08:50:56.601747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.898 [2024-11-20 08:50:56.601753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.898 [2024-11-20 08:50:56.601757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x19c5750) 00:17:25.898 [2024-11-20 08:50:56.601766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.898 [2024-11-20 08:50:56.601788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.898 [2024-11-20 08:50:56.605821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.898 [2024-11-20 08:50:56.605847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.898 [2024-11-20 08:50:56.605854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.898 [2024-11-20 08:50:56.605859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.898 [2024-11-20 08:50:56.605877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:25.898 [2024-11-20 08:50:56.605884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:25.898 [2024-11-20 08:50:56.605888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c5750) 00:17:25.898 [2024-11-20 08:50:56.605898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.898 [2024-11-20 08:50:56.605929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a29bc0, cid 3, qid 0 00:17:25.898 [2024-11-20 08:50:56.605983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:25.898 [2024-11-20 08:50:56.605992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:25.898 [2024-11-20 08:50:56.605997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:25.898 [2024-11-20 08:50:56.606001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a29bc0) on tqpair=0x19c5750 00:17:25.898 [2024-11-20 08:50:56.606012] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:17:25.898 00:17:25.898 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:25.898 [2024-11-20 08:50:56.648960] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:17:25.898 [2024-11-20 08:50:56.649017] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74672 ] 00:17:26.162 [2024-11-20 08:50:56.812951] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:26.162 [2024-11-20 08:50:56.813028] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:26.162 [2024-11-20 08:50:56.813038] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:26.162 [2024-11-20 08:50:56.813053] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:26.162 [2024-11-20 08:50:56.813073] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:26.162 [2024-11-20 08:50:56.813469] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:26.162 [2024-11-20 08:50:56.813558] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6bf750 0 00:17:26.162 [2024-11-20 08:50:56.819837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:26.162 [2024-11-20 08:50:56.819877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:26.162 [2024-11-20 08:50:56.819885] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:26.162 [2024-11-20 08:50:56.819890] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:26.162 [2024-11-20 08:50:56.819928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.819938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.819943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.162 [2024-11-20 08:50:56.819959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:26.162 [2024-11-20 08:50:56.819998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.162 [2024-11-20 08:50:56.827837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.162 [2024-11-20 08:50:56.827866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.162 [2024-11-20 08:50:56.827872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.827878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.162 [2024-11-20 08:50:56.827891] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:26.162 [2024-11-20 08:50:56.827901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:26.162 [2024-11-20 08:50:56.827910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:26.162 [2024-11-20 08:50:56.827930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.827937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.827941] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.162 [2024-11-20 08:50:56.827952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.162 [2024-11-20 08:50:56.827987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.162 [2024-11-20 08:50:56.828075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.162 [2024-11-20 08:50:56.828085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.162 [2024-11-20 08:50:56.828089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.162 [2024-11-20 08:50:56.828101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:26.162 [2024-11-20 08:50:56.828119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:26.162 [2024-11-20 08:50:56.828129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.162 [2024-11-20 08:50:56.828148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.162 [2024-11-20 08:50:56.828174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.162 [2024-11-20 08:50:56.828262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.162 [2024-11-20 08:50:56.828271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.162 [2024-11-20 08:50:56.828275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.162 [2024-11-20 08:50:56.828287] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:26.162 [2024-11-20 08:50:56.828297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:26.162 [2024-11-20 08:50:56.828307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.162 [2024-11-20 08:50:56.828326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.162 [2024-11-20 08:50:56.828349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.162 [2024-11-20 08:50:56.828435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.162 [2024-11-20 08:50:56.828461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.162 [2024-11-20 
08:50:56.828471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.162 [2024-11-20 08:50:56.828484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:26.162 [2024-11-20 08:50:56.828499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.162 [2024-11-20 08:50:56.828519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.162 [2024-11-20 08:50:56.828569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.162 [2024-11-20 08:50:56.828642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.162 [2024-11-20 08:50:56.828659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.162 [2024-11-20 08:50:56.828665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.162 [2024-11-20 08:50:56.828680] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:26.162 [2024-11-20 08:50:56.828687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:26.162 [2024-11-20 08:50:56.828697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:26.162 [2024-11-20 08:50:56.828813] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:26.162 [2024-11-20 08:50:56.828830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:26.162 [2024-11-20 08:50:56.828843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.162 [2024-11-20 08:50:56.828853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.828862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.163 [2024-11-20 08:50:56.828891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.163 [2024-11-20 08:50:56.828976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.163 [2024-11-20 08:50:56.828985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.163 [2024-11-20 08:50:56.828989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.828994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.163 
[2024-11-20 08:50:56.829000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:26.163 [2024-11-20 08:50:56.829013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.829033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.163 [2024-11-20 08:50:56.829057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.163 [2024-11-20 08:50:56.829137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.163 [2024-11-20 08:50:56.829152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.163 [2024-11-20 08:50:56.829157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.163 [2024-11-20 08:50:56.829168] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:26.163 [2024-11-20 08:50:56.829174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.829184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:26.163 [2024-11-20 08:50:56.829203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.829217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.829232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.163 [2024-11-20 08:50:56.829258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.163 [2024-11-20 08:50:56.829404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:26.163 [2024-11-20 08:50:56.829421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:26.163 [2024-11-20 08:50:56.829427] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829434] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6bf750): datao=0, datal=4096, cccid=0 00:17:26.163 [2024-11-20 08:50:56.829444] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x723740) on tqpair(0x6bf750): expected_datao=0, payload_size=4096 00:17:26.163 [2024-11-20 08:50:56.829456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829472] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829480] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.163 [2024-11-20 08:50:56.829499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.163 [2024-11-20 08:50:56.829504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.163 [2024-11-20 08:50:56.829519] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:26.163 [2024-11-20 08:50:56.829526] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:26.163 [2024-11-20 08:50:56.829531] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:26.163 [2024-11-20 08:50:56.829536] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:26.163 [2024-11-20 08:50:56.829542] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:26.163 [2024-11-20 08:50:56.829548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.829567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.829578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.829597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.163 [2024-11-20 08:50:56.829626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.163 [2024-11-20 08:50:56.829710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.163 [2024-11-20 08:50:56.829719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.163 [2024-11-20 08:50:56.829723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.163 [2024-11-20 08:50:56.829738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.829756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.163 [2024-11-20 08:50:56.829764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.829779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.163 [2024-11-20 08:50:56.829787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.829820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.163 [2024-11-20 08:50:56.829829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.829845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.163 [2024-11-20 08:50:56.829851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.829871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.829881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.829886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6bf750) 00:17:26.163 [2024-11-20 08:50:56.829895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.163 [2024-11-20 08:50:56.829924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723740, cid 0, qid 0 00:17:26.163 [2024-11-20 08:50:56.829934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7238c0, cid 1, qid 0 00:17:26.163 [2024-11-20 08:50:56.829940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723a40, cid 2, qid 0 00:17:26.163 [2024-11-20 08:50:56.829945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.163 [2024-11-20 08:50:56.829951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723d40, cid 4, qid 0 00:17:26.163 [2024-11-20 08:50:56.830088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.163 [2024-11-20 08:50:56.830097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.163 [2024-11-20 08:50:56.830101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.830106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723d40) on tqpair=0x6bf750 00:17:26.163 [2024-11-20 08:50:56.830113] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:26.163 [2024-11-20 08:50:56.830119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.830130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.830145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:26.163 [2024-11-20 08:50:56.830154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.830160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.163 [2024-11-20 08:50:56.830164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6bf750) 00:17:26.164 [2024-11-20 08:50:56.830173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:26.164 [2024-11-20 08:50:56.830199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723d40, cid 4, qid 0 00:17:26.164 [2024-11-20 08:50:56.830284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.164 [2024-11-20 08:50:56.830293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.164 [2024-11-20 08:50:56.830297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723d40) on tqpair=0x6bf750 00:17:26.164 [2024-11-20 08:50:56.830372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.830395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.830408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6bf750) 00:17:26.164 [2024-11-20 08:50:56.830423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.164 [2024-11-20 08:50:56.830457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723d40, cid 4, qid 0 00:17:26.164 [2024-11-20 08:50:56.830554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:26.164 [2024-11-20 08:50:56.830564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:26.164 [2024-11-20 08:50:56.830568] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830573] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6bf750): datao=0, datal=4096, cccid=4 00:17:26.164 [2024-11-20 08:50:56.830578] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x723d40) on tqpair(0x6bf750): expected_datao=0, payload_size=4096 00:17:26.164 [2024-11-20 08:50:56.830584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830592] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830598] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.164 [2024-11-20 
08:50:56.830615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.164 [2024-11-20 08:50:56.830619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723d40) on tqpair=0x6bf750 00:17:26.164 [2024-11-20 08:50:56.830645] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:26.164 [2024-11-20 08:50:56.830661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.830675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.830686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6bf750) 00:17:26.164 [2024-11-20 08:50:56.830701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.164 [2024-11-20 08:50:56.830728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723d40, cid 4, qid 0 00:17:26.164 [2024-11-20 08:50:56.830886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:26.164 [2024-11-20 08:50:56.830906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:26.164 [2024-11-20 08:50:56.830912] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830916] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6bf750): datao=0, datal=4096, cccid=4 00:17:26.164 [2024-11-20 08:50:56.830922] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x723d40) on tqpair(0x6bf750): expected_datao=0, payload_size=4096 00:17:26.164 [2024-11-20 08:50:56.830927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830936] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830941] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.164 [2024-11-20 08:50:56.830959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.164 [2024-11-20 08:50:56.830963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.830968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723d40) on tqpair=0x6bf750 00:17:26.164 [2024-11-20 08:50:56.830997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6bf750) 00:17:26.164 [2024-11-20 08:50:56.831038] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.164 [2024-11-20 08:50:56.831066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723d40, cid 4, qid 0 00:17:26.164 [2024-11-20 08:50:56.831157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:26.164 [2024-11-20 08:50:56.831172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:26.164 [2024-11-20 08:50:56.831177] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831182] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6bf750): datao=0, datal=4096, cccid=4 00:17:26.164 [2024-11-20 08:50:56.831187] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x723d40) on tqpair(0x6bf750): expected_datao=0, payload_size=4096 00:17:26.164 [2024-11-20 08:50:56.831193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831201] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.164 [2024-11-20 08:50:56.831225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.164 [2024-11-20 08:50:56.831229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723d40) on tqpair=0x6bf750 00:17:26.164 [2024-11-20 08:50:56.831245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831298] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:26.164 [2024-11-20 08:50:56.831304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:26.164 [2024-11-20 08:50:56.831311] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:26.164 [2024-11-20 08:50:56.831341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x6bf750) 00:17:26.164 [2024-11-20 08:50:56.831358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.164 [2024-11-20 08:50:56.831367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6bf750) 00:17:26.164 [2024-11-20 08:50:56.831383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.164 [2024-11-20 08:50:56.831418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723d40, cid 4, qid 0 00:17:26.164 [2024-11-20 08:50:56.831429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723ec0, cid 5, qid 0 00:17:26.164 [2024-11-20 08:50:56.831525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.164 [2024-11-20 08:50:56.831537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.164 [2024-11-20 08:50:56.831542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.164 [2024-11-20 08:50:56.831547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723d40) on tqpair=0x6bf750 00:17:26.164 [2024-11-20 08:50:56.831555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.164 [2024-11-20 08:50:56.831562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.164 [2024-11-20 08:50:56.831566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.831571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723ec0) on tqpair=0x6bf750 00:17:26.165 [2024-11-20 08:50:56.831584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.831590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6bf750) 00:17:26.165 [2024-11-20 08:50:56.831599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.165 [2024-11-20 08:50:56.831625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723ec0, cid 5, qid 0 00:17:26.165 [2024-11-20 08:50:56.831701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.165 [2024-11-20 08:50:56.831709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.165 [2024-11-20 08:50:56.831714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.831719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723ec0) on tqpair=0x6bf750 00:17:26.165 [2024-11-20 08:50:56.831732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.831738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6bf750) 00:17:26.165 [2024-11-20 08:50:56.831747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.165 [2024-11-20 08:50:56.831771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723ec0, cid 5, qid 0 00:17:26.165 [2024-11-20 08:50:56.835823] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.165 [2024-11-20 08:50:56.835848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.165 [2024-11-20 08:50:56.835855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.835860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723ec0) on tqpair=0x6bf750 00:17:26.165 [2024-11-20 08:50:56.835877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.835884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6bf750) 00:17:26.165 [2024-11-20 08:50:56.835894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.165 [2024-11-20 08:50:56.835926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723ec0, cid 5, qid 0 00:17:26.165 [2024-11-20 08:50:56.836003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.165 [2024-11-20 08:50:56.836012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.165 [2024-11-20 08:50:56.836017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723ec0) on tqpair=0x6bf750 00:17:26.165 [2024-11-20 08:50:56.836048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6bf750) 00:17:26.165 [2024-11-20 08:50:56.836066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.165 [2024-11-20 08:50:56.836076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6bf750) 00:17:26.165 [2024-11-20 08:50:56.836089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.165 [2024-11-20 08:50:56.836098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x6bf750) 00:17:26.165 [2024-11-20 08:50:56.836111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.165 [2024-11-20 08:50:56.836121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6bf750) 00:17:26.165 [2024-11-20 08:50:56.836133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.165 [2024-11-20 08:50:56.836161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723ec0, cid 5, qid 0 00:17:26.165 [2024-11-20 08:50:56.836172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723d40, cid 4, qid 0 00:17:26.165 
[2024-11-20 08:50:56.836178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x724040, cid 6, qid 0 00:17:26.165 [2024-11-20 08:50:56.836184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7241c0, cid 7, qid 0 00:17:26.165 [2024-11-20 08:50:56.836370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:26.165 [2024-11-20 08:50:56.836389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:26.165 [2024-11-20 08:50:56.836395] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836400] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6bf750): datao=0, datal=8192, cccid=5 00:17:26.165 [2024-11-20 08:50:56.836405] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x723ec0) on tqpair(0x6bf750): expected_datao=0, payload_size=8192 00:17:26.165 [2024-11-20 08:50:56.836411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836433] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836445] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:26.165 [2024-11-20 08:50:56.836469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:26.165 [2024-11-20 08:50:56.836475] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836479] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6bf750): datao=0, datal=512, cccid=4 00:17:26.165 [2024-11-20 08:50:56.836484] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x723d40) on tqpair(0x6bf750): expected_datao=0, payload_size=512 00:17:26.165 [2024-11-20 08:50:56.836489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836497] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836502] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:26.165 [2024-11-20 08:50:56.836516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:26.165 [2024-11-20 08:50:56.836520] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836538] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6bf750): datao=0, datal=512, cccid=6 00:17:26.165 [2024-11-20 08:50:56.836545] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x724040) on tqpair(0x6bf750): expected_datao=0, payload_size=512 00:17:26.165 [2024-11-20 08:50:56.836550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836558] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836563] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:26.165 [2024-11-20 08:50:56.836576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:26.165 [2024-11-20 08:50:56.836580] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836584] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x6bf750): datao=0, datal=4096, cccid=7 00:17:26.165 [2024-11-20 08:50:56.836589] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7241c0) on tqpair(0x6bf750): expected_datao=0, payload_size=4096 00:17:26.165 [2024-11-20 08:50:56.836594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836602] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836607] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.165 [2024-11-20 08:50:56.836625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.165 [2024-11-20 08:50:56.836640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.165 [2024-11-20 08:50:56.836645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723ec0) on tqpair=0x6bf750 00:17:26.165 [2024-11-20 08:50:56.836667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.165 [2024-11-20 08:50:56.836676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.165 [2024-11-20 08:50:56.836680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.166 [2024-11-20 08:50:56.836684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723d40) on tqpair=0x6bf750 00:17:26.166 [2024-11-20 08:50:56.836700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.166 [2024-11-20 08:50:56.836709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.166 [2024-11-20 08:50:56.836713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.166 [2024-11-20 08:50:56.836718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x724040) on tqpair=0x6bf750 00:17:26.166 [2024-11-20 08:50:56.836726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.166 [2024-11-20 08:50:56.836733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.166 [2024-11-20 08:50:56.836738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.166 [2024-11-20 08:50:56.836742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7241c0) on tqpair=0x6bf750 00:17:26.166 ===================================================== 00:17:26.166 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:26.166 ===================================================== 00:17:26.166 Controller Capabilities/Features 00:17:26.166 ================================ 00:17:26.166 Vendor ID: 8086 00:17:26.166 Subsystem Vendor ID: 8086 00:17:26.166 Serial Number: SPDK00000000000001 00:17:26.166 Model Number: SPDK bdev Controller 00:17:26.166 Firmware Version: 25.01 00:17:26.166 Recommended Arb Burst: 6 00:17:26.166 IEEE OUI Identifier: e4 d2 5c 00:17:26.166 Multi-path I/O 00:17:26.166 May have multiple subsystem ports: Yes 00:17:26.166 May have multiple controllers: Yes 00:17:26.166 Associated with SR-IOV VF: No 00:17:26.166 Max Data Transfer Size: 131072 00:17:26.166 Max Number of Namespaces: 32 00:17:26.166 Max Number of I/O Queues: 127 00:17:26.166 NVMe Specification Version (VS): 1.3 00:17:26.166 NVMe Specification Version (Identify): 1.3 00:17:26.166 Maximum Queue Entries: 128 00:17:26.166 Contiguous Queues Required: Yes 00:17:26.166 Arbitration Mechanisms Supported 00:17:26.166 Weighted Round Robin: Not 
Supported 00:17:26.166 Vendor Specific: Not Supported 00:17:26.166 Reset Timeout: 15000 ms 00:17:26.166 Doorbell Stride: 4 bytes 00:17:26.166 NVM Subsystem Reset: Not Supported 00:17:26.166 Command Sets Supported 00:17:26.166 NVM Command Set: Supported 00:17:26.166 Boot Partition: Not Supported 00:17:26.166 Memory Page Size Minimum: 4096 bytes 00:17:26.166 Memory Page Size Maximum: 4096 bytes 00:17:26.166 Persistent Memory Region: Not Supported 00:17:26.166 Optional Asynchronous Events Supported 00:17:26.166 Namespace Attribute Notices: Supported 00:17:26.166 Firmware Activation Notices: Not Supported 00:17:26.166 ANA Change Notices: Not Supported 00:17:26.166 PLE Aggregate Log Change Notices: Not Supported 00:17:26.166 LBA Status Info Alert Notices: Not Supported 00:17:26.166 EGE Aggregate Log Change Notices: Not Supported 00:17:26.166 Normal NVM Subsystem Shutdown event: Not Supported 00:17:26.166 Zone Descriptor Change Notices: Not Supported 00:17:26.166 Discovery Log Change Notices: Not Supported 00:17:26.166 Controller Attributes 00:17:26.166 128-bit Host Identifier: Supported 00:17:26.166 Non-Operational Permissive Mode: Not Supported 00:17:26.166 NVM Sets: Not Supported 00:17:26.166 Read Recovery Levels: Not Supported 00:17:26.166 Endurance Groups: Not Supported 00:17:26.166 Predictable Latency Mode: Not Supported 00:17:26.166 Traffic Based Keep ALive: Not Supported 00:17:26.166 Namespace Granularity: Not Supported 00:17:26.166 SQ Associations: Not Supported 00:17:26.166 UUID List: Not Supported 00:17:26.166 Multi-Domain Subsystem: Not Supported 00:17:26.166 Fixed Capacity Management: Not Supported 00:17:26.166 Variable Capacity Management: Not Supported 00:17:26.166 Delete Endurance Group: Not Supported 00:17:26.166 Delete NVM Set: Not Supported 00:17:26.166 Extended LBA Formats Supported: Not Supported 00:17:26.166 Flexible Data Placement Supported: Not Supported 00:17:26.166 00:17:26.166 Controller Memory Buffer Support 00:17:26.166 ================================ 00:17:26.166 Supported: No 00:17:26.166 00:17:26.166 Persistent Memory Region Support 00:17:26.166 ================================ 00:17:26.166 Supported: No 00:17:26.166 00:17:26.166 Admin Command Set Attributes 00:17:26.166 ============================ 00:17:26.166 Security Send/Receive: Not Supported 00:17:26.166 Format NVM: Not Supported 00:17:26.166 Firmware Activate/Download: Not Supported 00:17:26.166 Namespace Management: Not Supported 00:17:26.166 Device Self-Test: Not Supported 00:17:26.166 Directives: Not Supported 00:17:26.166 NVMe-MI: Not Supported 00:17:26.166 Virtualization Management: Not Supported 00:17:26.166 Doorbell Buffer Config: Not Supported 00:17:26.166 Get LBA Status Capability: Not Supported 00:17:26.166 Command & Feature Lockdown Capability: Not Supported 00:17:26.166 Abort Command Limit: 4 00:17:26.166 Async Event Request Limit: 4 00:17:26.166 Number of Firmware Slots: N/A 00:17:26.166 Firmware Slot 1 Read-Only: N/A 00:17:26.166 Firmware Activation Without Reset: N/A 00:17:26.166 Multiple Update Detection Support: N/A 00:17:26.166 Firmware Update Granularity: No Information Provided 00:17:26.166 Per-Namespace SMART Log: No 00:17:26.166 Asymmetric Namespace Access Log Page: Not Supported 00:17:26.166 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:26.166 Command Effects Log Page: Supported 00:17:26.166 Get Log Page Extended Data: Supported 00:17:26.166 Telemetry Log Pages: Not Supported 00:17:26.166 Persistent Event Log Pages: Not Supported 00:17:26.166 Supported Log Pages Log Page: May 
Support 00:17:26.166 Commands Supported & Effects Log Page: Not Supported 00:17:26.166 Feature Identifiers & Effects Log Page:May Support 00:17:26.166 NVMe-MI Commands & Effects Log Page: May Support 00:17:26.166 Data Area 4 for Telemetry Log: Not Supported 00:17:26.166 Error Log Page Entries Supported: 128 00:17:26.166 Keep Alive: Supported 00:17:26.166 Keep Alive Granularity: 10000 ms 00:17:26.166 00:17:26.166 NVM Command Set Attributes 00:17:26.166 ========================== 00:17:26.166 Submission Queue Entry Size 00:17:26.166 Max: 64 00:17:26.166 Min: 64 00:17:26.166 Completion Queue Entry Size 00:17:26.166 Max: 16 00:17:26.166 Min: 16 00:17:26.166 Number of Namespaces: 32 00:17:26.166 Compare Command: Supported 00:17:26.166 Write Uncorrectable Command: Not Supported 00:17:26.166 Dataset Management Command: Supported 00:17:26.166 Write Zeroes Command: Supported 00:17:26.166 Set Features Save Field: Not Supported 00:17:26.166 Reservations: Supported 00:17:26.166 Timestamp: Not Supported 00:17:26.166 Copy: Supported 00:17:26.166 Volatile Write Cache: Present 00:17:26.166 Atomic Write Unit (Normal): 1 00:17:26.166 Atomic Write Unit (PFail): 1 00:17:26.166 Atomic Compare & Write Unit: 1 00:17:26.166 Fused Compare & Write: Supported 00:17:26.166 Scatter-Gather List 00:17:26.166 SGL Command Set: Supported 00:17:26.166 SGL Keyed: Supported 00:17:26.166 SGL Bit Bucket Descriptor: Not Supported 00:17:26.166 SGL Metadata Pointer: Not Supported 00:17:26.166 Oversized SGL: Not Supported 00:17:26.166 SGL Metadata Address: Not Supported 00:17:26.166 SGL Offset: Supported 00:17:26.166 Transport SGL Data Block: Not Supported 00:17:26.166 Replay Protected Memory Block: Not Supported 00:17:26.166 00:17:26.166 Firmware Slot Information 00:17:26.166 ========================= 00:17:26.167 Active slot: 1 00:17:26.167 Slot 1 Firmware Revision: 25.01 00:17:26.167 00:17:26.167 00:17:26.167 Commands Supported and Effects 00:17:26.167 ============================== 00:17:26.167 Admin Commands 00:17:26.167 -------------- 00:17:26.167 Get Log Page (02h): Supported 00:17:26.167 Identify (06h): Supported 00:17:26.167 Abort (08h): Supported 00:17:26.167 Set Features (09h): Supported 00:17:26.167 Get Features (0Ah): Supported 00:17:26.167 Asynchronous Event Request (0Ch): Supported 00:17:26.167 Keep Alive (18h): Supported 00:17:26.167 I/O Commands 00:17:26.167 ------------ 00:17:26.167 Flush (00h): Supported LBA-Change 00:17:26.167 Write (01h): Supported LBA-Change 00:17:26.167 Read (02h): Supported 00:17:26.167 Compare (05h): Supported 00:17:26.167 Write Zeroes (08h): Supported LBA-Change 00:17:26.167 Dataset Management (09h): Supported LBA-Change 00:17:26.167 Copy (19h): Supported LBA-Change 00:17:26.167 00:17:26.167 Error Log 00:17:26.167 ========= 00:17:26.167 00:17:26.167 Arbitration 00:17:26.167 =========== 00:17:26.167 Arbitration Burst: 1 00:17:26.167 00:17:26.167 Power Management 00:17:26.167 ================ 00:17:26.167 Number of Power States: 1 00:17:26.167 Current Power State: Power State #0 00:17:26.167 Power State #0: 00:17:26.167 Max Power: 0.00 W 00:17:26.167 Non-Operational State: Operational 00:17:26.167 Entry Latency: Not Reported 00:17:26.167 Exit Latency: Not Reported 00:17:26.167 Relative Read Throughput: 0 00:17:26.167 Relative Read Latency: 0 00:17:26.167 Relative Write Throughput: 0 00:17:26.167 Relative Write Latency: 0 00:17:26.167 Idle Power: Not Reported 00:17:26.167 Active Power: Not Reported 00:17:26.167 Non-Operational Permissive Mode: Not Supported 00:17:26.167 00:17:26.167 Health 
Information 00:17:26.167 ================== 00:17:26.167 Critical Warnings: 00:17:26.167 Available Spare Space: OK 00:17:26.167 Temperature: OK 00:17:26.167 Device Reliability: OK 00:17:26.167 Read Only: No 00:17:26.167 Volatile Memory Backup: OK 00:17:26.167 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:26.167 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:26.167 Available Spare: 0% 00:17:26.167 Available Spare Threshold: 0% 00:17:26.167 Life Percentage Used:[2024-11-20 08:50:56.836881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.167 [2024-11-20 08:50:56.836891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6bf750) 00:17:26.167 [2024-11-20 08:50:56.836901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.167 [2024-11-20 08:50:56.836932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7241c0, cid 7, qid 0 00:17:26.167 [2024-11-20 08:50:56.837011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.167 [2024-11-20 08:50:56.837019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.167 [2024-11-20 08:50:56.837024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.167 [2024-11-20 08:50:56.837029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7241c0) on tqpair=0x6bf750 00:17:26.167 [2024-11-20 08:50:56.837078] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:26.167 [2024-11-20 08:50:56.837094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723740) on tqpair=0x6bf750 00:17:26.167 [2024-11-20 08:50:56.837102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.167 [2024-11-20 08:50:56.837109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7238c0) on tqpair=0x6bf750 00:17:26.167 [2024-11-20 08:50:56.837115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.167 [2024-11-20 08:50:56.837121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723a40) on tqpair=0x6bf750 00:17:26.167 [2024-11-20 08:50:56.837126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.167 [2024-11-20 08:50:56.837132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.167 [2024-11-20 08:50:56.837138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.167 [2024-11-20 08:50:56.837149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.167 [2024-11-20 08:50:56.837155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.167 [2024-11-20 08:50:56.837160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.167 [2024-11-20 08:50:56.837169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.167 [2024-11-20 08:50:56.837198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.167 [2024-11-20 
08:50:56.837275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.837284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.837288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.837303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.837321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.837349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.837459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.837475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.837481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.837492] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:26.168 [2024-11-20 08:50:56.837498] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:26.168 [2024-11-20 08:50:56.837511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.837531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.837557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.837628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.837637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.837641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.837660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.837679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.837703] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.837780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.837788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.837793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.837829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.837848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.837875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.837953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.837961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.837966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.837983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.837994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.838002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.838026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.838105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.838114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.838118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.838136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.838155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.838178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.838252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 
08:50:56.838261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.838265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.838283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.838302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.838326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.838393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.838401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.838406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.838423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.838451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.838485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.838565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.838580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.838586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.838605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.838624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.838649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.838721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.168 [2024-11-20 08:50:56.838729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.168 [2024-11-20 08:50:56.838734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.168 [2024-11-20 
08:50:56.838738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.168 [2024-11-20 08:50:56.838751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.168 [2024-11-20 08:50:56.838762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.168 [2024-11-20 08:50:56.838770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.168 [2024-11-20 08:50:56.838794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.168 [2024-11-20 08:50:56.842844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.169 [2024-11-20 08:50:56.842857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.169 [2024-11-20 08:50:56.842862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.169 [2024-11-20 08:50:56.842867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.169 [2024-11-20 08:50:56.842883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:26.169 [2024-11-20 08:50:56.842891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:26.169 [2024-11-20 08:50:56.842895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6bf750) 00:17:26.169 [2024-11-20 08:50:56.842905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.169 [2024-11-20 08:50:56.842936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x723bc0, cid 3, qid 0 00:17:26.169 [2024-11-20 08:50:56.843014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:26.169 [2024-11-20 08:50:56.843022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:26.169 [2024-11-20 08:50:56.843027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:26.169 [2024-11-20 08:50:56.843031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x723bc0) on tqpair=0x6bf750 00:17:26.169 [2024-11-20 08:50:56.843041] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:17:26.169 0% 00:17:26.169 Data Units Read: 0 00:17:26.169 Data Units Written: 0 00:17:26.169 Host Read Commands: 0 00:17:26.169 Host Write Commands: 0 00:17:26.169 Controller Busy Time: 0 minutes 00:17:26.169 Power Cycles: 0 00:17:26.169 Power On Hours: 0 hours 00:17:26.169 Unsafe Shutdowns: 0 00:17:26.169 Unrecoverable Media Errors: 0 00:17:26.169 Lifetime Error Log Entries: 0 00:17:26.169 Warning Temperature Time: 0 minutes 00:17:26.169 Critical Temperature Time: 0 minutes 00:17:26.169 00:17:26.169 Number of Queues 00:17:26.169 ================ 00:17:26.169 Number of I/O Submission Queues: 127 00:17:26.169 Number of I/O Completion Queues: 127 00:17:26.169 00:17:26.169 Active Namespaces 00:17:26.169 ================= 00:17:26.169 Namespace ID:1 00:17:26.169 Error Recovery Timeout: Unlimited 00:17:26.169 Command Set Identifier: NVM (00h) 00:17:26.169 Deallocate: Supported 00:17:26.169 Deallocated/Unwritten Error: Not Supported 00:17:26.169 Deallocated Read Value: Unknown 00:17:26.169 Deallocate in Write Zeroes: Not Supported 
00:17:26.169 Deallocated Guard Field: 0xFFFF 00:17:26.169 Flush: Supported 00:17:26.169 Reservation: Supported 00:17:26.169 Namespace Sharing Capabilities: Multiple Controllers 00:17:26.169 Size (in LBAs): 131072 (0GiB) 00:17:26.169 Capacity (in LBAs): 131072 (0GiB) 00:17:26.169 Utilization (in LBAs): 131072 (0GiB) 00:17:26.169 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:26.169 EUI64: ABCDEF0123456789 00:17:26.169 UUID: f6f59be0-dfe1-4c31-9f82-771fc2a7192e 00:17:26.169 Thin Provisioning: Not Supported 00:17:26.169 Per-NS Atomic Units: Yes 00:17:26.169 Atomic Boundary Size (Normal): 0 00:17:26.169 Atomic Boundary Size (PFail): 0 00:17:26.169 Atomic Boundary Offset: 0 00:17:26.169 Maximum Single Source Range Length: 65535 00:17:26.169 Maximum Copy Length: 65535 00:17:26.169 Maximum Source Range Count: 1 00:17:26.169 NGUID/EUI64 Never Reused: No 00:17:26.169 Namespace Write Protected: No 00:17:26.169 Number of LBA Formats: 1 00:17:26.169 Current LBA Format: LBA Format #00 00:17:26.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:26.169 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.169 rmmod nvme_tcp 00:17:26.169 rmmod nvme_fabrics 00:17:26.169 rmmod nvme_keyring 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74624 ']' 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74624 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74624 ']' 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74624 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.169 08:50:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74624 00:17:26.169 killing process with 
pid 74624 00:17:26.169 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.169 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.169 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74624' 00:17:26.169 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74624 00:17:26.169 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74624 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@300 -- # return 0 00:17:26.738 00:17:26.738 real 0m3.134s 00:17:26.738 user 0m7.894s 00:17:26.738 sys 0m0.853s 00:17:26.738 ************************************ 00:17:26.738 END TEST nvmf_identify 00:17:26.738 ************************************ 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.738 08:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.001 ************************************ 00:17:27.001 START TEST nvmf_perf 00:17:27.001 ************************************ 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:27.001 * Looking for test storage... 00:17:27.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:27.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.001 --rc genhtml_branch_coverage=1 00:17:27.001 --rc genhtml_function_coverage=1 00:17:27.001 --rc genhtml_legend=1 00:17:27.001 --rc geninfo_all_blocks=1 00:17:27.001 --rc geninfo_unexecuted_blocks=1 00:17:27.001 00:17:27.001 ' 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:27.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.001 --rc genhtml_branch_coverage=1 00:17:27.001 --rc genhtml_function_coverage=1 00:17:27.001 --rc genhtml_legend=1 00:17:27.001 --rc geninfo_all_blocks=1 00:17:27.001 --rc geninfo_unexecuted_blocks=1 00:17:27.001 00:17:27.001 ' 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:27.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.001 --rc genhtml_branch_coverage=1 00:17:27.001 --rc genhtml_function_coverage=1 00:17:27.001 --rc genhtml_legend=1 00:17:27.001 --rc geninfo_all_blocks=1 00:17:27.001 --rc geninfo_unexecuted_blocks=1 00:17:27.001 00:17:27.001 ' 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:27.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.001 --rc genhtml_branch_coverage=1 00:17:27.001 --rc genhtml_function_coverage=1 00:17:27.001 --rc genhtml_legend=1 00:17:27.001 --rc geninfo_all_blocks=1 00:17:27.001 --rc geninfo_unexecuted_blocks=1 00:17:27.001 00:17:27.001 ' 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.001 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:27.002 Cannot find device "nvmf_init_br" 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:27.002 Cannot find device "nvmf_init_br2" 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:27.002 Cannot find device "nvmf_tgt_br" 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:27.002 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.264 Cannot find device "nvmf_tgt_br2" 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:27.264 Cannot find device "nvmf_init_br" 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:27.264 Cannot find device "nvmf_init_br2" 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:27.264 Cannot find device "nvmf_tgt_br" 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:27.264 Cannot find device "nvmf_tgt_br2" 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:27.264 Cannot find device "nvmf_br" 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:27.264 Cannot find device "nvmf_init_if" 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:27.264 08:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:27.264 Cannot find device "nvmf_init_if2" 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:27.264 08:50:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.264 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:27.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:27.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:17:27.524 00:17:27.524 --- 10.0.0.3 ping statistics --- 00:17:27.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.524 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:27.524 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:27.524 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:17:27.524 00:17:27.524 --- 10.0.0.4 ping statistics --- 00:17:27.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.524 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:27.524 00:17:27.524 --- 10.0.0.1 ping statistics --- 00:17:27.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.524 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:27.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:27.524 00:17:27.524 --- 10.0.0.2 ping statistics --- 00:17:27.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.524 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74886 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74886 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74886 ']' 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.524 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:27.524 [2024-11-20 08:50:58.377232] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:27.524 [2024-11-20 08:50:58.377336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.783 [2024-11-20 08:50:58.531481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.783 [2024-11-20 08:50:58.626748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.783 [2024-11-20 08:50:58.626869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.783 [2024-11-20 08:50:58.626884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.783 [2024-11-20 08:50:58.626895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.783 [2024-11-20 08:50:58.626905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.783 [2024-11-20 08:50:58.628481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.783 [2024-11-20 08:50:58.628587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.783 [2024-11-20 08:50:58.628672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.783 [2024-11-20 08:50:58.628672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.043 [2024-11-20 08:50:58.708222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.043 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.043 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:28.043 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.043 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.043 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:28.043 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.043 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:28.043 08:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:28.611 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:28.611 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:28.611 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:28.611 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:29.179 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:29.179 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:17:29.179 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:29.179 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:29.179 08:50:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:29.437 [2024-11-20 08:51:00.107356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.438 08:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:29.696 08:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:29.696 08:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:29.955 08:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:29.955 08:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:30.215 08:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:30.474 [2024-11-20 08:51:01.185774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.474 08:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:30.732 08:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:30.732 08:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:30.732 08:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:30.732 08:51:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:31.670 Initializing NVMe Controllers 00:17:31.670 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:31.670 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:31.670 Initialization complete. Launching workers. 00:17:31.670 ======================================================== 00:17:31.670 Latency(us) 00:17:31.670 Device Information : IOPS MiB/s Average min max 00:17:31.670 PCIE (0000:00:10.0) NSID 1 from core 0: 23936.00 93.50 1336.38 371.31 8000.79 00:17:31.670 ======================================================== 00:17:31.670 Total : 23936.00 93.50 1336.38 371.31 8000.79 00:17:31.670 00:17:31.929 08:51:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:33.319 Initializing NVMe Controllers 00:17:33.319 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:33.319 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:33.319 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:33.319 Initialization complete. Launching workers. 
00:17:33.319 ======================================================== 00:17:33.319 Latency(us) 00:17:33.319 Device Information : IOPS MiB/s Average min max 00:17:33.319 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3695.00 14.43 269.18 103.60 6138.00 00:17:33.319 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8070.92 5019.15 12021.94 00:17:33.319 ======================================================== 00:17:33.319 Total : 3820.00 14.92 524.47 103.60 12021.94 00:17:33.319 00:17:33.319 08:51:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:34.696 Initializing NVMe Controllers 00:17:34.696 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:34.696 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:34.696 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:34.696 Initialization complete. Launching workers. 00:17:34.696 ======================================================== 00:17:34.696 Latency(us) 00:17:34.696 Device Information : IOPS MiB/s Average min max 00:17:34.696 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8671.99 33.87 3693.34 575.90 10349.77 00:17:34.696 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4007.00 15.65 8030.45 4798.67 16633.81 00:17:34.696 ======================================================== 00:17:34.696 Total : 12678.99 49.53 5064.01 575.90 16633.81 00:17:34.696 00:17:34.696 08:51:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:34.696 08:51:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:37.268 Initializing NVMe Controllers 00:17:37.268 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:37.268 Controller IO queue size 128, less than required. 00:17:37.268 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:37.268 Controller IO queue size 128, less than required. 00:17:37.268 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:37.268 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:37.268 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:37.268 Initialization complete. Launching workers. 
00:17:37.268 ======================================================== 00:17:37.269 Latency(us) 00:17:37.269 Device Information : IOPS MiB/s Average min max 00:17:37.269 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1603.45 400.86 81245.01 45656.19 134355.85 00:17:37.269 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.48 159.62 206741.01 71082.51 342273.50 00:17:37.269 ======================================================== 00:17:37.269 Total : 2241.93 560.48 116985.06 45656.19 342273.50 00:17:37.269 00:17:37.269 08:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:37.526 Initializing NVMe Controllers 00:17:37.526 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:37.526 Controller IO queue size 128, less than required. 00:17:37.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:37.526 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:37.526 Controller IO queue size 128, less than required. 00:17:37.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:37.526 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:37.526 WARNING: Some requested NVMe devices were skipped 00:17:37.526 No valid NVMe controllers or AIO or URING devices found 00:17:37.526 08:51:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:40.057 Initializing NVMe Controllers 00:17:40.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:40.057 Controller IO queue size 128, less than required. 00:17:40.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:40.057 Controller IO queue size 128, less than required. 00:17:40.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:40.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:40.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:40.057 Initialization complete. Launching workers. 
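The -o 36964 run above skips both namespaces because the requested IO size is not sector-aligned for either of them: 36964 = 72 * 512 + 100 and 36964 = 9 * 4096 + 100, so it is a multiple of neither the 512-byte sectors of nsid 1 nor the 4096-byte sectors of nsid 2, which is exactly what the two warnings report. A quick check:

    $ echo $((36964 % 512)) $((36964 % 4096))
    100 100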
00:17:40.057 00:17:40.057 ==================== 00:17:40.057 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:40.057 TCP transport: 00:17:40.057 polls: 8561 00:17:40.057 idle_polls: 5074 00:17:40.057 sock_completions: 3487 00:17:40.057 nvme_completions: 5881 00:17:40.057 submitted_requests: 8832 00:17:40.058 queued_requests: 1 00:17:40.058 00:17:40.058 ==================== 00:17:40.058 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:40.058 TCP transport: 00:17:40.058 polls: 8670 00:17:40.058 idle_polls: 4909 00:17:40.058 sock_completions: 3761 00:17:40.058 nvme_completions: 6497 00:17:40.058 submitted_requests: 9760 00:17:40.058 queued_requests: 1 00:17:40.058 ======================================================== 00:17:40.058 Latency(us) 00:17:40.058 Device Information : IOPS MiB/s Average min max 00:17:40.058 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1468.40 367.10 89753.98 42750.62 142740.22 00:17:40.058 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1622.24 405.56 79172.11 42091.62 123110.02 00:17:40.058 ======================================================== 00:17:40.058 Total : 3090.64 772.66 84199.70 42091.62 142740.22 00:17:40.058 00:17:40.058 08:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:40.058 08:51:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.317 rmmod nvme_tcp 00:17:40.317 rmmod nvme_fabrics 00:17:40.317 rmmod nvme_keyring 00:17:40.317 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74886 ']' 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74886 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74886 ']' 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74886 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74886 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.576 killing process with pid 74886 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74886' 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74886 00:17:40.576 08:51:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74886 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:41.144 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:41.403 00:17:41.403 real 0m14.618s 00:17:41.403 user 0m52.553s 00:17:41.403 sys 0m4.203s 00:17:41.403 08:51:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:41.403 ************************************ 00:17:41.403 END TEST nvmf_perf 00:17:41.403 ************************************ 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.403 08:51:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.664 ************************************ 00:17:41.664 START TEST nvmf_fio_host 00:17:41.664 ************************************ 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:41.664 * Looking for test storage... 00:17:41.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:41.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.664 --rc genhtml_branch_coverage=1 00:17:41.664 --rc genhtml_function_coverage=1 00:17:41.664 --rc genhtml_legend=1 00:17:41.664 --rc geninfo_all_blocks=1 00:17:41.664 --rc geninfo_unexecuted_blocks=1 00:17:41.664 00:17:41.664 ' 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:41.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.664 --rc genhtml_branch_coverage=1 00:17:41.664 --rc genhtml_function_coverage=1 00:17:41.664 --rc genhtml_legend=1 00:17:41.664 --rc geninfo_all_blocks=1 00:17:41.664 --rc geninfo_unexecuted_blocks=1 00:17:41.664 00:17:41.664 ' 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:41.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.664 --rc genhtml_branch_coverage=1 00:17:41.664 --rc genhtml_function_coverage=1 00:17:41.664 --rc genhtml_legend=1 00:17:41.664 --rc geninfo_all_blocks=1 00:17:41.664 --rc geninfo_unexecuted_blocks=1 00:17:41.664 00:17:41.664 ' 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:41.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.664 --rc genhtml_branch_coverage=1 00:17:41.664 --rc genhtml_function_coverage=1 00:17:41.664 --rc genhtml_legend=1 00:17:41.664 --rc geninfo_all_blocks=1 00:17:41.664 --rc geninfo_unexecuted_blocks=1 00:17:41.664 00:17:41.664 ' 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.664 08:51:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.664 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.665 08:51:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
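The common.sh variables set above (NVMF_PORT, NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID, NVME_CONNECT, NVME_HOST) are the pieces other host tests assemble into a kernel-initiator connect command; this particular fio.sh run drives IO through the SPDK fio plugin instead, so the following is only an illustration of how those values fit together, using the NQN/ID generated in this run and the cnode1 subsystem the test creates later:

    # Illustrative only; fio.sh below does not use the kernel initiator.
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb \
        --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb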
00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:41.665 Cannot find device "nvmf_init_br" 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:41.665 Cannot find device "nvmf_init_br2" 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:41.665 Cannot find device "nvmf_tgt_br" 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:41.665 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:41.925 Cannot find device "nvmf_tgt_br2" 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:41.926 Cannot find device "nvmf_init_br" 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:41.926 Cannot find device "nvmf_init_br2" 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:41.926 Cannot find device "nvmf_tgt_br" 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:41.926 Cannot find device "nvmf_tgt_br2" 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:41.926 Cannot find device "nvmf_br" 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:41.926 Cannot find device "nvmf_init_if" 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:41.926 Cannot find device "nvmf_init_if2" 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:41.926 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:42.185 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
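The nvmf_veth_init sequence traced above builds the virtual test topology: two initiator-side veth pairs, two target-side pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, with 10.0.0.1/.2 on the host side and 10.0.0.3/.4 inside the namespace. Stripped of the xtrace prefixes, the core of it is:

    # Condensed from the nvmf_veth_init trace above; the iptables ACCEPT rules
    # and the per-interface 'ip link set ... up' calls are omitted for brevity.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2

The four pings that follow simply confirm that both sides of the bridge can reach every address before the target is started.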
00:17:42.185 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:17:42.185 00:17:42.185 --- 10.0.0.3 ping statistics --- 00:17:42.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.185 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:42.185 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:42.185 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:17:42.185 00:17:42.185 --- 10.0.0.4 ping statistics --- 00:17:42.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.185 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:42.185 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:42.186 00:17:42.186 --- 10.0.0.1 ping statistics --- 00:17:42.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.186 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:42.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:42.186 00:17:42.186 --- 10.0.0.2 ping statistics --- 00:17:42.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.186 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75353 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75353 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 75353 ']' 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.186 08:51:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.186 [2024-11-20 08:51:12.989444] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:42.186 [2024-11-20 08:51:12.989548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.496 [2024-11-20 08:51:13.142038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:42.496 [2024-11-20 08:51:13.214945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.496 [2024-11-20 08:51:13.215006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.496 [2024-11-20 08:51:13.215017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.496 [2024-11-20 08:51:13.215026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.496 [2024-11-20 08:51:13.215034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
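Since 10.0.0.3 only exists inside the namespace, the target application itself is launched there; the trace above amounts to starting nvmf_tgt under ip netns exec and then waiting for its RPC socket. A reduced sketch (waitforlisten is the autotest_common.sh helper; the polling loop below is an illustrative stand-in for it):

    # Start the SPDK target in the test namespace (shm id 0, tracepoint group
    # mask 0xFFFF, core mask 0xF) and wait until the RPC socket is available.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten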
00:17:42.496 [2024-11-20 08:51:13.216314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.496 [2024-11-20 08:51:13.216405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.496 [2024-11-20 08:51:13.216457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.496 [2024-11-20 08:51:13.216457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.497 [2024-11-20 08:51:13.288364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.497 08:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.497 08:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:42.497 08:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:42.755 [2024-11-20 08:51:13.656302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.014 08:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:43.014 08:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.014 08:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.014 08:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:43.272 Malloc1 00:17:43.272 08:51:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.531 08:51:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:43.791 08:51:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:44.049 [2024-11-20 08:51:14.902185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:44.049 08:51:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:44.308 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:44.309 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:44.309 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:44.309 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:44.309 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:44.309 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:44.567 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:44.567 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:44.567 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:44.567 08:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:44.567 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:44.567 fio-3.35 00:17:44.567 Starting 1 thread 00:17:47.102 00:17:47.102 test: (groupid=0, jobs=1): err= 0: pid=75428: Wed Nov 20 08:51:17 2024 00:17:47.102 read: IOPS=8834, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2007msec) 00:17:47.102 slat (usec): min=2, max=268, avg= 2.54, stdev= 2.60 00:17:47.102 clat (usec): min=1890, max=13690, avg=7536.14, stdev=529.69 00:17:47.102 lat (usec): min=1931, max=13692, avg=7538.68, stdev=529.42 00:17:47.102 clat percentiles (usec): 00:17:47.102 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 7177], 00:17:47.102 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7504], 60.00th=[ 7635], 00:17:47.102 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8094], 95.00th=[ 8291], 00:17:47.102 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11469], 99.95th=[12911], 00:17:47.102 | 99.99th=[13566] 00:17:47.102 bw ( KiB/s): min=34616, max=35656, per=99.99%, avg=35332.00, stdev=483.96, samples=4 00:17:47.102 iops : min= 8654, max= 8914, avg=8833.00, stdev=120.99, samples=4 00:17:47.102 write: IOPS=8848, BW=34.6MiB/s (36.2MB/s)(69.4MiB/2007msec); 0 zone resets 00:17:47.102 slat (usec): min=2, max=185, avg= 2.62, stdev= 1.75 00:17:47.102 clat (usec): min=1788, max=13407, avg=6883.60, stdev=491.78 00:17:47.102 lat (usec): min=1799, max=13409, avg=6886.23, stdev=491.65 00:17:47.102 clat percentiles 
(usec): 00:17:47.102 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:17:47.102 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6849], 60.00th=[ 6980], 00:17:47.102 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7570], 00:17:47.102 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[10945], 99.95th=[12649], 00:17:47.102 | 99.99th=[13435] 00:17:47.102 bw ( KiB/s): min=34872, max=35688, per=99.99%, avg=35392.00, stdev=374.21, samples=4 00:17:47.102 iops : min= 8716, max= 8922, avg=8848.00, stdev=94.45, samples=4 00:17:47.102 lat (msec) : 2=0.01%, 4=0.14%, 10=99.66%, 20=0.19% 00:17:47.102 cpu : usr=70.99%, sys=21.54%, ctx=25, majf=0, minf=7 00:17:47.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:47.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:47.102 issued rwts: total=17730,17759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.102 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:47.102 00:17:47.102 Run status group 0 (all jobs): 00:17:47.102 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.6MB), run=2007-2007msec 00:17:47.102 WRITE: bw=34.6MiB/s (36.2MB/s), 34.6MiB/s-34.6MiB/s (36.2MB/s-36.2MB/s), io=69.4MiB (72.7MB), run=2007-2007msec 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:47.102 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.103 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.103 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:47.103 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:47.103 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:47.103 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:47.103 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:47.103 08:51:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:47.103 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:47.103 fio-3.35 00:17:47.103 Starting 1 thread 00:17:49.636 00:17:49.636 test: (groupid=0, jobs=1): err= 0: pid=75477: Wed Nov 20 08:51:20 2024 00:17:49.636 read: IOPS=8034, BW=126MiB/s (132MB/s)(252MiB/2007msec) 00:17:49.636 slat (usec): min=3, max=131, avg= 4.38, stdev= 1.89 00:17:49.636 clat (usec): min=2635, max=17429, avg=8737.36, stdev=2500.74 00:17:49.636 lat (usec): min=2639, max=17433, avg=8741.74, stdev=2500.86 00:17:49.636 clat percentiles (usec): 00:17:49.636 | 1.00th=[ 4424], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6456], 00:17:49.636 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9110], 00:17:49.636 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[11994], 95.00th=[13304], 00:17:49.636 | 99.00th=[15664], 99.50th=[16188], 99.90th=[16909], 99.95th=[16909], 00:17:49.636 | 99.99th=[17433] 00:17:49.636 bw ( KiB/s): min=61792, max=69344, per=50.32%, avg=64696.00, stdev=3246.53, samples=4 00:17:49.636 iops : min= 3862, max= 4334, avg=4043.50, stdev=202.91, samples=4 00:17:49.636 write: IOPS=4658, BW=72.8MiB/s (76.3MB/s)(133MiB/1824msec); 0 zone resets 00:17:49.636 slat (usec): min=36, max=202, avg=39.75, stdev= 5.58 00:17:49.636 clat (usec): min=4417, max=21480, avg=12883.67, stdev=2167.82 00:17:49.636 lat (usec): min=4454, max=21517, avg=12923.42, stdev=2169.32 00:17:49.636 clat percentiles (usec): 00:17:49.636 | 1.00th=[ 8356], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11076], 00:17:49.636 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:17:49.636 | 70.00th=[13698], 80.00th=[14484], 90.00th=[15795], 95.00th=[16909], 00:17:49.636 | 99.00th=[19268], 99.50th=[19792], 99.90th=[21103], 99.95th=[21365], 00:17:49.636 | 99.99th=[21365] 00:17:49.636 bw ( KiB/s): min=64032, max=71872, per=90.34%, avg=67344.00, stdev=3288.22, samples=4 00:17:49.636 iops : min= 4002, max= 4492, avg=4209.00, stdev=205.51, samples=4 00:17:49.636 lat (msec) : 4=0.17%, 10=48.52%, 20=51.15%, 50=0.16% 00:17:49.636 cpu : usr=76.07%, sys=17.50%, ctx=54, majf=0, minf=8 00:17:49.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:17:49.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:49.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:49.636 issued rwts: total=16126,8498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:49.636 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:49.636 00:17:49.636 Run status group 0 (all jobs): 00:17:49.636 READ: bw=126MiB/s (132MB/s), 
126MiB/s-126MiB/s (132MB/s-132MB/s), io=252MiB (264MB), run=2007-2007msec 00:17:49.636 WRITE: bw=72.8MiB/s (76.3MB/s), 72.8MiB/s-72.8MiB/s (76.3MB/s-76.3MB/s), io=133MiB (139MB), run=1824-1824msec 00:17:49.636 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.895 rmmod nvme_tcp 00:17:49.895 rmmod nvme_fabrics 00:17:49.895 rmmod nvme_keyring 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75353 ']' 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75353 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75353 ']' 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75353 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75353 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.895 killing process with pid 75353 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75353' 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75353 00:17:49.895 08:51:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75353 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:50.155 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:50.413 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:50.413 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:50.413 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.413 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:50.413 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:17:50.414 ************************************ 00:17:50.414 END TEST nvmf_fio_host 00:17:50.414 ************************************ 00:17:50.414 00:17:50.414 real 0m8.963s 00:17:50.414 user 0m35.585s 00:17:50.414 sys 0m2.535s 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.414 08:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.674 ************************************ 00:17:50.674 START TEST nvmf_failover 00:17:50.674 
************************************ 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:50.674 * Looking for test storage... 00:17:50.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:50.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.674 --rc genhtml_branch_coverage=1 00:17:50.674 --rc genhtml_function_coverage=1 00:17:50.674 --rc genhtml_legend=1 00:17:50.674 --rc geninfo_all_blocks=1 00:17:50.674 --rc geninfo_unexecuted_blocks=1 00:17:50.674 00:17:50.674 ' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:50.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.674 --rc genhtml_branch_coverage=1 00:17:50.674 --rc genhtml_function_coverage=1 00:17:50.674 --rc genhtml_legend=1 00:17:50.674 --rc geninfo_all_blocks=1 00:17:50.674 --rc geninfo_unexecuted_blocks=1 00:17:50.674 00:17:50.674 ' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:50.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.674 --rc genhtml_branch_coverage=1 00:17:50.674 --rc genhtml_function_coverage=1 00:17:50.674 --rc genhtml_legend=1 00:17:50.674 --rc geninfo_all_blocks=1 00:17:50.674 --rc geninfo_unexecuted_blocks=1 00:17:50.674 00:17:50.674 ' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:50.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.674 --rc genhtml_branch_coverage=1 00:17:50.674 --rc genhtml_function_coverage=1 00:17:50.674 --rc genhtml_legend=1 00:17:50.674 --rc geninfo_all_blocks=1 00:17:50.674 --rc geninfo_unexecuted_blocks=1 00:17:50.674 00:17:50.674 ' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.674 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.675 
08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
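From here nvmftestinit takes over. NET_TYPE=virt in the sourced common.sh means no physical NIC is involved: nvmf_veth_init, traced below, builds the whole fabric out of veth pairs, a bridge and one network namespace, with the initiator side on 10.0.0.1/10.0.0.2 and the target side on 10.0.0.3/10.0.0.4 behind ports 4420-4422. A condensed, hand-runnable sketch of that topology, reduced to a single initiator/target pair (interface and namespace names are the ones from the trace; run as root; this is an illustration of the layout, not the harness's exact code path):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                           # initiator -> target sanity check

The harness does the same thing twice (the *_if2/*_br2 pair carries 10.0.0.2 and 10.0.0.4) and then verifies reachability with the four pings seen further down.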
00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:50.675 Cannot find device "nvmf_init_br" 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:50.675 Cannot find device "nvmf_init_br2" 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:50.675 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:17:50.935 Cannot find device "nvmf_tgt_br" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.935 Cannot find device "nvmf_tgt_br2" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:50.935 Cannot find device "nvmf_init_br" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:50.935 Cannot find device "nvmf_init_br2" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:50.935 Cannot find device "nvmf_tgt_br" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:50.935 Cannot find device "nvmf_tgt_br2" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:50.935 Cannot find device "nvmf_br" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:50.935 Cannot find device "nvmf_init_if" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:50.935 Cannot find device "nvmf_init_if2" 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.935 
08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.935 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.936 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:50.936 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:50.936 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:51.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:51.195 00:17:51.195 --- 10.0.0.3 ping statistics --- 00:17:51.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.195 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:51.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:51.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:17:51.195 00:17:51.195 --- 10.0.0.4 ping statistics --- 00:17:51.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.195 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:51.195 00:17:51.195 --- 10.0.0.1 ping statistics --- 00:17:51.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.195 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:51.195 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:51.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:51.195 00:17:51.195 --- 10.0.0.2 ping statistics --- 00:17:51.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.196 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75743 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75743 00:17:51.196 08:51:21 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75743 ']' 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.196 08:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:51.196 [2024-11-20 08:51:22.011783] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:51.196 [2024-11-20 08:51:22.011915] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.455 [2024-11-20 08:51:22.166852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:51.455 [2024-11-20 08:51:22.250663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.455 [2024-11-20 08:51:22.250747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.455 [2024-11-20 08:51:22.250762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.455 [2024-11-20 08:51:22.250772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.455 [2024-11-20 08:51:22.250782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
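The target itself was started above as ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE: -m is the reactor core mask, -e the tracepoint group mask (echoed by the "Tracepoint Group Mask 0xFFFF specified" notice), and -i the shared-memory id that the process_shm trap refers to. 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches the "Total cores available: 3" line and the three reactor notices that follow, leaving core 0 for everything else on the VM. A throwaway way to decode such a mask (illustrative only, not part of the harness):

    mask=0xE
    for ((cpu = 0; cpu < 64; cpu++)); do
        (( (mask >> cpu) & 1 )) && echo "core $cpu"    # prints core 1, core 2, core 3
    done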
00:17:51.455 [2024-11-20 08:51:22.252362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.455 [2024-11-20 08:51:22.252504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.455 [2024-11-20 08:51:22.252509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.455 [2024-11-20 08:51:22.328607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:52.392 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.392 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:52.392 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:52.392 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.392 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:52.392 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.392 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:52.652 [2024-11-20 08:51:23.377274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.652 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:52.911 Malloc0 00:17:52.911 08:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:53.170 08:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.429 08:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:53.689 [2024-11-20 08:51:24.578740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:53.689 08:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:53.949 [2024-11-20 08:51:24.834908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:53.949 08:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:54.208 [2024-11-20 08:51:25.095118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75811 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
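Condensed from the RPC trace above, this is the whole sequence that turns the freshly started target into an NVMe-oF TCP subsystem with one Malloc-backed namespace listening on all three test ports (paths and arguments exactly as in the log; the -o and -u transport flags are carried over verbatim from NVMF_TRANSPORT_OPTS rather than interpreted here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001   # -a: allow any host, -s: serial
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s "$port"
    done

bdevperf is then launched against its own socket (/var/tmp/bdevperf.sock) with -q 128 -o 4096 -w verify -t 15; the -z flag makes it wait to be driven over RPC, which is what the perform_tests call further down does.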
00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75811 /var/tmp/bdevperf.sock 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75811 ']' 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.208 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:54.776 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.776 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:54.776 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:55.035 NVMe0n1 00:17:55.035 08:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:55.295 00:17:55.295 08:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75828 00:17:55.295 08:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:55.295 08:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:56.673 08:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:56.673 08:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:59.957 08:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:00.215 00:18:00.215 08:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:00.526 08:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:03.812 08:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:03.812 [2024-11-20 08:51:34.494788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.812 08:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:04.748 08:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:05.006 08:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75828 00:18:11.572 { 00:18:11.572 "results": [ 00:18:11.572 { 00:18:11.572 "job": "NVMe0n1", 00:18:11.572 "core_mask": "0x1", 00:18:11.572 "workload": "verify", 00:18:11.572 "status": "finished", 00:18:11.572 "verify_range": { 00:18:11.572 "start": 0, 00:18:11.572 "length": 16384 00:18:11.572 }, 00:18:11.572 "queue_depth": 128, 00:18:11.572 "io_size": 4096, 00:18:11.572 "runtime": 15.009334, 00:18:11.572 "iops": 8873.544955425737, 00:18:11.572 "mibps": 34.66228498213179, 00:18:11.572 "io_failed": 3557, 00:18:11.572 "io_timeout": 0, 00:18:11.572 "avg_latency_us": 14016.289493509059, 00:18:11.572 "min_latency_us": 636.7418181818182, 00:18:11.572 "max_latency_us": 17515.985454545455 00:18:11.572 } 00:18:11.572 ], 00:18:11.572 "core_count": 1 00:18:11.572 } 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75811 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75811 ']' 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75811 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75811 00:18:11.572 killing process with pid 75811 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75811' 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75811 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75811 00:18:11.572 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:11.572 [2024-11-20 08:51:25.162149] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:18:11.572 [2024-11-20 08:51:25.162272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75811 ] 00:18:11.572 [2024-11-20 08:51:25.312682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.572 [2024-11-20 08:51:25.393437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.572 [2024-11-20 08:51:25.468375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:11.572 Running I/O for 15 seconds... 
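The results block above is internally consistent: 8873.5 IOPS of 4096-byte I/O is 8873.5 * 4096 / 2^20, roughly 34.66 MiB/s, matching the reported mibps, and the 3557 failed I/Os are presumably the ones caught in flight while a listener was being torn down; because the controller was attached with -x failover, bdevperf moves to a surviving path instead of aborting, so the job still finishes ("status": "finished") after its 15-second run. The path juggling itself is nothing more than listener add/remove RPCs with sleeps in between, condensed here from the trace (same NQN, address and ports as above; the bdevperf-side call goes to its own RPC socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # bdevperf already holds NVMe0 paths on ports 4420 and 4421 (-x failover)
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4420    # drop 1st path
    sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
         -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $nqn -x failover            # add 3rd path
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4421    # drop 2nd path
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420       # restore 1st
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4422    # drop 3rd path

The ABORTED - SQ DELETION completions in the try.txt dump that follows are the visible effect of each removal: the queue pair behind the dropped listener is deleted and its outstanding reads and writes come back aborted, which is what the failover logic then has to absorb.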
00:18:11.572 6677.00 IOPS, 26.08 MiB/s [2024-11-20T08:51:42.487Z] [2024-11-20 08:51:27.492387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.572 [2024-11-20 08:51:27.492509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.492584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.492615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.492645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.492669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.492696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.492720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.492746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.492770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.492818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.492849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.492876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.492899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.492924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.492947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.492972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.492995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.572 [2024-11-20 08:51:27.493021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.572 [2024-11-20 08:51:27.493044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:11.572 [2024-11-20 08:51:27.493068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:11.572 [2024-11-20 08:51:27.493148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every I/O still outstanding on qid:1 — WRITE lba 61808 through 62608 and READ lba 61608 through 61712 — each completed as ABORTED - SQ DELETION (00/08), timestamps 08:51:27.493175 through 08:51:27.499246 ...]
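Every completion above carries the status pair (00/08). For readers decoding those pairs by hand, the following is a minimal, hypothetical helper (not part of SPDK or of this test) that maps the SCT/SC pair printed by spdk_nvme_print_completion to a readable label; only the generic status codes that actually appear in this log are listed.

# decode_status.py - hypothetical helper, assumes the "(SCT/SC)" format shown in this log
STATUS_CODE_TYPES = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC", 0x2: "MEDIA AND DATA INTEGRITY", 0x7: "VENDOR SPECIFIC"}
GENERIC_STATUS_CODES = {0x00: "SUCCESS", 0x07: "ABORTED - BY REQUEST", 0x08: "ABORTED - SQ DELETION"}

def decode_status(pair: str) -> str:
    """Turn a '(00/08)'-style pair into e.g. 'GENERIC: ABORTED - SQ DELETION'."""
    sct, sc = (int(field, 16) for field in pair.strip("()").split("/"))
    sct_name = STATUS_CODE_TYPES.get(sct, f"SCT 0x{sct:02x}")
    if sct == 0x0:
        sc_name = GENERIC_STATUS_CODES.get(sc, f"SC 0x{sc:02x}")
    else:
        sc_name = f"SC 0x{sc:02x}"
    return f"{sct_name}: {sc_name}"

if __name__ == "__main__":
    print(decode_status("(00/08)"))   # prints: GENERIC: ABORTED - SQ DELETION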
00:18:11.575 [2024-11-20 08:51:27.499273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x731070 is same with the state(6) to be set
00:18:11.575 [2024-11-20 08:51:27.499303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:11.575 [2024-11-20 08:51:27.499323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:11.575 [2024-11-20 08:51:27.499342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61720 len:8 PRP1 0x0 PRP2 0x0
00:18:11.575 [2024-11-20 08:51:27.499364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:11.575 [2024-11-20 08:51:27.499396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:11.575 [2024-11-20 08:51:27.499415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:11.575 [2024-11-20 08:51:27.499434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62616 len:8 PRP1 0x0 PRP2 0x0
00:18:11.575 [2024-11-20 08:51:27.499459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:11.575 [2024-11-20 08:51:27.499554] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:18:11.575 [2024-11-20 08:51:27.499648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:11.575 [2024-11-20 08:51:27.499681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining outstanding ASYNC EVENT REQUESTs (qid:0, cid:1 through cid:3) are printed and aborted the same way ...]
00:18:11.575 [2024-11-20 08:51:27.499868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:11.575 [2024-11-20 08:51:27.499933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x696710 (9): Bad file descriptor
00:18:11.575 [2024-11-20 08:51:27.504214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:11.575 [2024-11-20 08:51:27.534573] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:18:11.575 7484.50 IOPS, 29.24 MiB/s [2024-11-20T08:51:42.490Z] 8040.00 IOPS, 31.41 MiB/s [2024-11-20T08:51:42.490Z] 8320.00 IOPS, 32.50 MiB/s [2024-11-20T08:51:42.490Z]
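The sequence above — queued I/O aborted with SQ DELETION, a failover from 10.0.0.3:4420 to 10.0.0.3:4421, a controller reset, and throughput recovering to roughly 8.3k IOPS — is the behaviour this test exercises. A small sketch for pulling those events out of a captured console log follows; the log file name is hypothetical and the patterns only match the message formats shown here.

# summarize_failover_log.py - hypothetical post-processing sketch, not part of the test suite
import re
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
IOPS_RE = re.compile(r"(\d+\.\d+) IOPS, (\d+\.\d+) MiB/s")

aborts = Counter()      # aborted completions per qid
failovers = []          # (from, to) transport address pairs
iops_samples = []       # per-interval IOPS readings

with open("nvmf_failover.log") as log:   # hypothetical capture of this console output
    for line in log:
        aborts.update(ABORT_RE.findall(line))
        failovers += FAILOVER_RE.findall(line)
        iops_samples += [float(v) for v, _ in IOPS_RE.findall(line)]

print("aborted completions per qid:", dict(aborts))
print("failover events:", failovers)
print("IOPS samples:", iops_samples)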
00:18:11.575 [2024-11-20 08:51:31.231266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:11.575 [2024-11-20 08:51:31.231391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / print_completion pattern repeats at 08:51:31 for the I/O outstanding on qid:1 — WRITE lba 78584 through 78952 and READ lba 78064 through 78360 — each completed as ABORTED - SQ DELETION (00/08), timestamps 08:51:31.231425 through 08:51:31.234108 ...]
00:18:11.578 [2024-11-20 08:51:31.234124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:39 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78960 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.578 [2024-11-20 08:51:31.234458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.578 [2024-11-20 08:51:31.234488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.578 [2024-11-20 08:51:31.234518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.578 [2024-11-20 08:51:31.234547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.578 [2024-11-20 08:51:31.234576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.578 [2024-11-20 08:51:31.234612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.578 [2024-11-20 08:51:31.234643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.578 [2024-11-20 08:51:31.234672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 
[2024-11-20 08:51:31.234767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.234975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.234989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.235012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.235027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.235043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.235058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.235073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.235087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.235103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.235117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.235132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-20 08:51:31.235146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.235161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7351d0 is same with the state(6) to be set 00:18:11.578 [2024-11-20 08:51:31.235178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.578 [2024-11-20 08:51:31.235190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.578 [2024-11-20 08:51:31.235201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78568 len:8 PRP1 0x0 PRP2 0x0 00:18:11.578 [2024-11-20 08:51:31.235214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.578 [2024-11-20 08:51:31.235229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.579 [2024-11-20 08:51:31.235246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.579 [2024-11-20 08:51:31.235257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:18:11.579 [2024-11-20 08:51:31.235271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.579 [2024-11-20 08:51:31.235294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.579 [2024-11-20 08:51:31.235305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0 00:18:11.579 [2024-11-20 08:51:31.235319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.579 [2024-11-20 08:51:31.235343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.579 [2024-11-20 08:51:31.235353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:18:11.579 [2024-11-20 08:51:31.235367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.579 [2024-11-20 08:51:31.235391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:18:11.579 [2024-11-20 08:51:31.235409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:18:11.579 [2024-11-20 08:51:31.235423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.579 [2024-11-20 08:51:31.235448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.579 [2024-11-20 08:51:31.235458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:18:11.579 [2024-11-20 08:51:31.235472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.579 [2024-11-20 08:51:31.235496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.579 [2024-11-20 08:51:31.235507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:18:11.579 [2024-11-20 08:51:31.235520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.579 [2024-11-20 08:51:31.235544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.579 [2024-11-20 08:51:31.235554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:18:11.579 [2024-11-20 08:51:31.235567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.579 [2024-11-20 08:51:31.235591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.579 [2024-11-20 08:51:31.235601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:18:11.579 [2024-11-20 08:51:31.235615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235688] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:11.579 [2024-11-20 08:51:31.235772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.579 [2024-11-20 08:51:31.235795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.579 [2024-11-20 08:51:31.235847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:11.579 [2024-11-20 08:51:31.235861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.579 [2024-11-20 08:51:31.235875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.579 [2024-11-20 08:51:31.235903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:31.235924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:11.579 [2024-11-20 08:51:31.235997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x696710 (9): Bad file descriptor 00:18:11.579 [2024-11-20 08:51:31.239967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:11.579 [2024-11-20 08:51:31.268788] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:18:11.579 8406.00 IOPS, 32.84 MiB/s [2024-11-20T08:51:42.494Z] 8541.00 IOPS, 33.36 MiB/s [2024-11-20T08:51:42.494Z] 8636.29 IOPS, 33.74 MiB/s [2024-11-20T08:51:42.494Z] 8709.75 IOPS, 34.02 MiB/s [2024-11-20T08:51:42.494Z] 8765.11 IOPS, 34.24 MiB/s [2024-11-20T08:51:42.494Z] [2024-11-20 08:51:35.797123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.579 [2024-11-20 08:51:35.797209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.579 [2024-11-20 08:51:35.797274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.579 [2024-11-20 08:51:35.797307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.579 [2024-11-20 08:51:35.797337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.579 [2024-11-20 08:51:35.797368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.579 [2024-11-20 08:51:35.797400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.579 [2024-11-20 08:51:35.797431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.579 [2024-11-20 08:51:35.797461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.579 [2024-11-20 08:51:35.797836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-20 08:51:35.797852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.797869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.797883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.797899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.797913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.797929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.797945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.797961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.797975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.797991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 
[2024-11-20 08:51:35.798398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-11-20 08:51:35.798780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.798973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.798987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.799003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.799018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.799033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.799048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.799065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:96 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.799080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.580 [2024-11-20 08:51:35.799096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-20 08:51:35.799110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27952 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:11.581 [2024-11-20 08:51:35.799695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.799785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.799983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.799998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.800024] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.800042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.581 [2024-11-20 08:51:35.800057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.800073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.800087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.800104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.800118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.800134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.800148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.800164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.800179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.800194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.800209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.800225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-11-20 08:51:35.800239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.581 [2024-11-20 08:51:35.800254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.582 [2024-11-20 08:51:35.800269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.582 [2024-11-20 08:51:35.800299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800330] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.582 [2024-11-20 08:51:35.800781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742510 is same with the state(6) to be set 00:18:11.582 [2024-11-20 08:51:35.800836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.800856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.800868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27736 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.800883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.800909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.800919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28128 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.800933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.800957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.800968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28136 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.800982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.800996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801006] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28144 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28152 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28160 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28168 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28176 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28184 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28192 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28200 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28208 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.582 [2024-11-20 08:51:35.801503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28216 len:8 PRP1 0x0 PRP2 0x0 00:18:11.582 [2024-11-20 08:51:35.801517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.582 [2024-11-20 08:51:35.801530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.582 [2024-11-20 08:51:35.801540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.583 [2024-11-20 08:51:35.801551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28224 len:8 PRP1 0x0 PRP2 0x0 00:18:11.583 [2024-11-20 08:51:35.801564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.583 [2024-11-20 08:51:35.801578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.583 [2024-11-20 08:51:35.801588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.583 [2024-11-20 08:51:35.801598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28232 len:8 PRP1 0x0 PRP2 0x0 00:18:11.583 [2024-11-20 08:51:35.801612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.583 [2024-11-20 08:51:35.801626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.583 [2024-11-20 08:51:35.801635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.583 [2024-11-20 
08:51:35.801646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28240 len:8 PRP1 0x0 PRP2 0x0 00:18:11.583 [2024-11-20 08:51:35.801672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.583 [2024-11-20 08:51:35.801687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:11.583 [2024-11-20 08:51:35.801698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:11.583 [2024-11-20 08:51:35.801709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28248 len:8 PRP1 0x0 PRP2 0x0 00:18:11.583 [2024-11-20 08:51:35.801733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.583 [2024-11-20 08:51:35.801829] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:11.583 [2024-11-20 08:51:35.801894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.583 [2024-11-20 08:51:35.801916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.583 [2024-11-20 08:51:35.801933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.583 [2024-11-20 08:51:35.801947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.583 [2024-11-20 08:51:35.801961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.583 [2024-11-20 08:51:35.801976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.583 [2024-11-20 08:51:35.801991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.583 [2024-11-20 08:51:35.802005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.583 [2024-11-20 08:51:35.802020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:11.583 [2024-11-20 08:51:35.805847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:11.583 [2024-11-20 08:51:35.805889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x696710 (9): Bad file descriptor 00:18:11.583 [2024-11-20 08:51:35.837366] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:18:11.583 8765.30 IOPS, 34.24 MiB/s [2024-11-20T08:51:42.498Z] 8806.27 IOPS, 34.40 MiB/s [2024-11-20T08:51:42.498Z] 8839.08 IOPS, 34.53 MiB/s [2024-11-20T08:51:42.498Z] 8865.62 IOPS, 34.63 MiB/s [2024-11-20T08:51:42.498Z] 8859.57 IOPS, 34.61 MiB/s [2024-11-20T08:51:42.498Z] 8872.67 IOPS, 34.66 MiB/s 00:18:11.583 Latency(us) 00:18:11.583 [2024-11-20T08:51:42.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.583 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:11.583 Verification LBA range: start 0x0 length 0x4000 00:18:11.583 NVMe0n1 : 15.01 8873.54 34.66 236.99 0.00 14016.29 636.74 17515.99 00:18:11.583 [2024-11-20T08:51:42.498Z] =================================================================================================================== 00:18:11.583 [2024-11-20T08:51:42.498Z] Total : 8873.54 34.66 236.99 0.00 14016.29 636.74 17515.99 00:18:11.583 Received shutdown signal, test time was about 15.000000 seconds 00:18:11.583 00:18:11.583 Latency(us) 00:18:11.583 [2024-11-20T08:51:42.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.583 [2024-11-20T08:51:42.498Z] =================================================================================================================== 00:18:11.583 [2024-11-20T08:51:42.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:11.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76006 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76006 /var/tmp/bdevperf.sock 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76006 ']' 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.583 08:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:11.899 08:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.899 08:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:11.899 08:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:12.159 [2024-11-20 08:51:42.890075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:12.159 08:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:12.417 [2024-11-20 08:51:43.190304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:12.417 08:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:12.676 NVMe0n1 00:18:12.676 08:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:12.934 00:18:13.192 08:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:13.451 00:18:13.451 08:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:13.451 08:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:13.710 08:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:13.969 08:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:17.258 08:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:17.258 08:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:17.258 08:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76083 00:18:17.258 08:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.258 08:51:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 76083 00:18:18.634 { 00:18:18.634 "results": [ 00:18:18.634 { 00:18:18.634 "job": "NVMe0n1", 00:18:18.634 "core_mask": "0x1", 00:18:18.634 "workload": "verify", 00:18:18.634 "status": "finished", 00:18:18.634 "verify_range": { 00:18:18.634 "start": 0, 00:18:18.634 "length": 16384 00:18:18.634 }, 00:18:18.634 "queue_depth": 128, 
00:18:18.634 "io_size": 4096, 00:18:18.634 "runtime": 1.010681, 00:18:18.634 "iops": 8375.540848200371, 00:18:18.634 "mibps": 32.7169564382827, 00:18:18.634 "io_failed": 0, 00:18:18.634 "io_timeout": 0, 00:18:18.634 "avg_latency_us": 15181.736572625246, 00:18:18.634 "min_latency_us": 1720.32, 00:18:18.634 "max_latency_us": 15490.327272727272 00:18:18.634 } 00:18:18.634 ], 00:18:18.634 "core_count": 1 00:18:18.634 } 00:18:18.634 08:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:18.634 [2024-11-20 08:51:41.670291] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:18:18.634 [2024-11-20 08:51:41.670465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76006 ] 00:18:18.634 [2024-11-20 08:51:41.821190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.634 [2024-11-20 08:51:41.881505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.634 [2024-11-20 08:51:41.936302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:18.634 [2024-11-20 08:51:44.719394] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:18.634 [2024-11-20 08:51:44.719558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.634 [2024-11-20 08:51:44.719583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.634 [2024-11-20 08:51:44.719601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.634 [2024-11-20 08:51:44.719614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.634 [2024-11-20 08:51:44.719628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.635 [2024-11-20 08:51:44.719641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.635 [2024-11-20 08:51:44.719654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.635 [2024-11-20 08:51:44.719668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.635 [2024-11-20 08:51:44.719681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:18.635 [2024-11-20 08:51:44.719734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:18.635 [2024-11-20 08:51:44.719766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa7710 (9): Bad file descriptor 00:18:18.635 [2024-11-20 08:51:44.731370] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:18.635 Running I/O for 1 seconds... 
00:18:18.635 8321.00 IOPS, 32.50 MiB/s 00:18:18.635 Latency(us) 00:18:18.635 [2024-11-20T08:51:49.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.635 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:18.635 Verification LBA range: start 0x0 length 0x4000 00:18:18.635 NVMe0n1 : 1.01 8375.54 32.72 0.00 0.00 15181.74 1720.32 15490.33 00:18:18.635 [2024-11-20T08:51:49.550Z] =================================================================================================================== 00:18:18.635 [2024-11-20T08:51:49.550Z] Total : 8375.54 32.72 0.00 0.00 15181.74 1720.32 15490.33 00:18:18.635 08:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:18.635 08:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:18.635 08:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:19.202 08:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:19.202 08:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:19.202 08:51:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:19.460 08:51:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:22.741 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:22.741 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 76006 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76006 ']' 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76006 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76006 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.000 killing process with pid 76006 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76006' 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76006 00:18:23.000 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76006 00:18:23.259 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:23.259 08:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.520 rmmod nvme_tcp 00:18:23.520 rmmod nvme_fabrics 00:18:23.520 rmmod nvme_keyring 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75743 ']' 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75743 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75743 ']' 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75743 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75743 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:23.520 killing process with pid 75743 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75743' 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75743 00:18:23.520 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75743 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:23.787 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:24.046 00:18:24.046 real 0m33.549s 00:18:24.046 user 2m8.875s 00:18:24.046 sys 0m6.058s 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.046 ************************************ 00:18:24.046 END TEST nvmf_failover 00:18:24.046 ************************************ 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.046 ************************************ 00:18:24.046 START TEST nvmf_host_discovery 00:18:24.046 ************************************ 00:18:24.046 08:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:24.306 * Looking for test storage... 
00:18:24.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:24.306 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.307 --rc genhtml_branch_coverage=1 00:18:24.307 --rc genhtml_function_coverage=1 00:18:24.307 --rc genhtml_legend=1 00:18:24.307 --rc geninfo_all_blocks=1 00:18:24.307 --rc geninfo_unexecuted_blocks=1 00:18:24.307 00:18:24.307 ' 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.307 --rc genhtml_branch_coverage=1 00:18:24.307 --rc genhtml_function_coverage=1 00:18:24.307 --rc genhtml_legend=1 00:18:24.307 --rc geninfo_all_blocks=1 00:18:24.307 --rc geninfo_unexecuted_blocks=1 00:18:24.307 00:18:24.307 ' 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.307 --rc genhtml_branch_coverage=1 00:18:24.307 --rc genhtml_function_coverage=1 00:18:24.307 --rc genhtml_legend=1 00:18:24.307 --rc geninfo_all_blocks=1 00:18:24.307 --rc geninfo_unexecuted_blocks=1 00:18:24.307 00:18:24.307 ' 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.307 --rc genhtml_branch_coverage=1 00:18:24.307 --rc genhtml_function_coverage=1 00:18:24.307 --rc genhtml_legend=1 00:18:24.307 --rc geninfo_all_blocks=1 00:18:24.307 --rc geninfo_unexecuted_blocks=1 00:18:24.307 00:18:24.307 ' 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.307 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.308 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:24.308 Cannot find device "nvmf_init_br" 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:24.308 Cannot find device "nvmf_init_br2" 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:24.308 Cannot find device "nvmf_tgt_br" 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.308 Cannot find device "nvmf_tgt_br2" 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:24.308 Cannot find device "nvmf_init_br" 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:24.308 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:24.308 Cannot find device "nvmf_init_br2" 00:18:24.567 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:24.568 Cannot find device "nvmf_tgt_br" 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:24.568 Cannot find device "nvmf_tgt_br2" 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:24.568 Cannot find device "nvmf_br" 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:24.568 Cannot find device "nvmf_init_if" 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:24.568 Cannot find device "nvmf_init_if2" 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:24.568 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:24.828 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:24.828 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:18:24.828 00:18:24.828 --- 10.0.0.3 ping statistics --- 00:18:24.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.828 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:24.828 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:24.828 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:18:24.828 00:18:24.828 --- 10.0.0.4 ping statistics --- 00:18:24.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.828 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:24.828 00:18:24.828 --- 10.0.0.1 ping statistics --- 00:18:24.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.828 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:24.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:24.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:24.828 00:18:24.828 --- 10.0.0.2 ping statistics --- 00:18:24.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.828 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76411 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76411 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76411 ']' 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.828 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.828 [2024-11-20 08:51:55.594948] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
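For readers reproducing this fixture by hand: the setup traced above (nvmf/common.sh) builds a veth/bridge topology between the root namespace and a target namespace, then verifies it with iptables rules and pings. The following is a minimal sketch of the single-interface case only, using the same device names and 10.0.0.0/24 addresses as the trace; the real helper also creates the second pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2/10.0.0.4) and first tears down any stale devices.

  # Sketch: recreate the initiator <-> target veth/bridge topology from the trace above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                             # bridge the two peer ends together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged forwarding
  ping -c 1 10.0.0.3                                          # root netns -> namespaced target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # and back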
00:18:24.829 [2024-11-20 08:51:55.595055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.088 [2024-11-20 08:51:55.745039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.088 [2024-11-20 08:51:55.827724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.088 [2024-11-20 08:51:55.827837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.088 [2024-11-20 08:51:55.827852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.088 [2024-11-20 08:51:55.827863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.088 [2024-11-20 08:51:55.827872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.088 [2024-11-20 08:51:55.828410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.088 [2024-11-20 08:51:55.905410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.088 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.088 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:25.088 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.088 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.088 08:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.348 [2024-11-20 08:51:56.039297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.348 [2024-11-20 08:51:56.047478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.348 08:51:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.348 null0 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.348 null1 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76436 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76436 /tmp/host.sock 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76436 ']' 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:25.348 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.348 08:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.348 [2024-11-20 08:51:56.139246] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
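The target-side provisioning traced above is all JSON-RPC. A rough stand-alone equivalent using scripts/rpc.py is sketched below; the test itself goes through the rpc_cmd wrapper, so the rpc.py invocation and the RPC variable are assumptions, while the method names and arguments are copied verbatim from the trace. Because the RPC socket is a Unix-domain socket, it stays reachable from the root namespace even though nvmf_tgt runs inside nvmf_tgt_ns_spdk.

  # Sketch: target-side setup mirroring the RPCs above, assuming nvmf_tgt is already
  # running in the namespace with its default RPC socket.
  RPC="scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192                # TCP transport, flags as in the trace
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
       -t tcp -a 10.0.0.3 -s 8009                             # discovery service on 10.0.0.3:8009
  $RPC bdev_null_create null0 1000 512                        # 1000 MB null bdev, 512-byte blocks
  $RPC bdev_null_create null1 1000 512
  $RPC bdev_wait_for_examine                                  # let bdev examination settle first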
00:18:25.348 [2024-11-20 08:51:56.139363] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76436 ] 00:18:25.607 [2024-11-20 08:51:56.293275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.607 [2024-11-20 08:51:56.381046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.607 [2024-11-20 08:51:56.460936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.543 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.544 08:51:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.544 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.804 08:51:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.804 [2024-11-20 08:51:57.571782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:26.804 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.063 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:18:27.064 08:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:27.323 [2024-11-20 08:51:58.232593] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:27.323 [2024-11-20 08:51:58.232660] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:27.323 [2024-11-20 08:51:58.232687] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:27.583 [2024-11-20 08:51:58.238631] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:27.583 [2024-11-20 08:51:58.293072] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:27.583 [2024-11-20 08:51:58.294308] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1794e60:1 started. 00:18:27.583 [2024-11-20 08:51:58.296452] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:27.583 [2024-11-20 08:51:58.296480] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:27.583 [2024-11-20 08:51:58.301257] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1794e60 was disconnected and freed. delete nvme_qpair. 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:28.152 08:51:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:28.152 08:51:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
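On the host side, the second nvmf_tgt instance acts purely as an initiator and is driven over /tmp/host.sock. The checks performed above can be replayed with scripts/rpc.py; this is a sketch under the assumption of the same socket path, discovery NQN and host NQN as in the trace (the HOST_RPC shorthand is illustrative only).

  # Sketch: start the discovery service and inspect what it attached, as the test does above.
  HOST_RPC="scripts/rpc.py -s /tmp/host.sock"
  $HOST_RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
            -f ipv4 -q nqn.2021-12.io.spdk:test               # follow the discovery log page
  $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name'      # "nvme0" once cnode0 allows this host
  $HOST_RPC bdev_get_bdevs | jq -r '.[].name'                 # "nvme0n1" (later also "nvme0n2")
  $HOST_RPC bdev_nvme_get_controllers -n nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid'               # connected port(s), e.g. 4420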
00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.152 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:28.446 [2024-11-20 08:51:59.075503] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x17a3000:1 started. 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:28.446 [2024-11-20 08:51:59.082315] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x17a3000 was disconnected and freed. delete nvme_qpair. 
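Each of the "[[ ... == ... ]]" assertions above is wrapped in autotest_common.sh's waitforcondition helper rather than run as a one-shot test; the traced pieces (local max=10, (( max-- )), eval of the condition string, sleep 1) correspond to a retry loop roughly like the sketch below. This is an approximation from the visible trace; the real helper's behavior on timeout may differ.

  # Sketch of the polling pattern in the trace: retry a shell condition for up to
  # ~10 seconds (one attempt per second) before treating the step as failed.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
          sleep 1
      done
      return 1
  }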
00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:28.446 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 [2024-11-20 08:51:59.185965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:28.447 [2024-11-20 08:51:59.187060] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:28.447 [2024-11-20 08:51:59.187101] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:28.447 [2024-11-20 08:51:59.193070] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:28.447 [2024-11-20 08:51:59.256583] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:28.447 [2024-11-20 08:51:59.256657] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:28.447 [2024-11-20 08:51:59.256670] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:28.447 [2024-11-20 08:51:59.256676] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:28.447 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.736 [2024-11-20 08:51:59.402421] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:28.736 [2024-11-20 08:51:59.402486] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:28.736 [2024-11-20 08:51:59.403543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.736 [2024-11-20 08:51:59.403595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.736 [2024-11-20 08:51:59.403625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.736 [2024-11-20 08:51:59.403634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.736 [2024-11-20 08:51:59.403645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.736 [2024-11-20 08:51:59.403654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.736 [2024-11-20 08:51:59.403664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.736 [2024-11-20 08:51:59.403674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.736 [2024-11-20 08:51:59.403683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771230 is same with the state(6) to be set 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:28.736 [2024-11-20 08:51:59.408447] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:28.736 [2024-11-20 08:51:59.408472] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:28.736 [2024-11-20 08:51:59.408528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1771230 (9): Bad file descriptor 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.736 08:51:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.736 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:28.737 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.996 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:28.996 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.996 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:28.996 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:28.997 08:51:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.997 08:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.935 [2024-11-20 08:52:00.773502] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:29.935 [2024-11-20 08:52:00.773546] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:29.935 [2024-11-20 08:52:00.773583] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:29.935 [2024-11-20 08:52:00.779547] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:29.935 [2024-11-20 08:52:00.837939] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:29.935 [2024-11-20 08:52:00.838901] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1794470:1 started. 00:18:29.935 [2024-11-20 08:52:00.841469] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:29.935 [2024-11-20 08:52:00.841531] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.935 [2024-11-20 08:52:00.843332] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1794470 was disconnected and freed. delete nvme_qpair. 
00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.935 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.194 request: 00:18:30.194 { 00:18:30.194 "name": "nvme", 00:18:30.194 "trtype": "tcp", 00:18:30.194 "traddr": "10.0.0.3", 00:18:30.194 "adrfam": "ipv4", 00:18:30.194 "trsvcid": "8009", 00:18:30.194 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:30.194 "wait_for_attach": true, 00:18:30.194 "method": "bdev_nvme_start_discovery", 00:18:30.194 "req_id": 1 00:18:30.194 } 00:18:30.194 Got JSON-RPC error response 00:18:30.194 response: 00:18:30.194 { 00:18:30.194 "code": -17, 00:18:30.194 "message": "File exists" 00:18:30.194 } 00:18:30.194 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:30.194 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.195 request: 00:18:30.195 { 00:18:30.195 "name": "nvme_second", 00:18:30.195 "trtype": "tcp", 00:18:30.195 "traddr": "10.0.0.3", 00:18:30.195 "adrfam": "ipv4", 00:18:30.195 "trsvcid": "8009", 00:18:30.195 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:30.195 "wait_for_attach": true, 00:18:30.195 "method": "bdev_nvme_start_discovery", 00:18:30.195 "req_id": 1 00:18:30.195 } 00:18:30.195 Got JSON-RPC error response 00:18:30.195 response: 00:18:30.195 { 00:18:30.195 "code": -17, 00:18:30.195 "message": "File exists" 00:18:30.195 } 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:30.195 08:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:30.195 08:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.571 [2024-11-20 08:52:02.109989] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.571 [2024-11-20 08:52:02.110090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796370 with addr=10.0.0.3, port=8010 00:18:31.571 [2024-11-20 08:52:02.110120] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:31.571 [2024-11-20 08:52:02.110131] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:31.571 [2024-11-20 08:52:02.110142] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:32.507 [2024-11-20 08:52:03.109969] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.507 [2024-11-20 08:52:03.110060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1796370 with addr=10.0.0.3, port=8010 00:18:32.507 [2024-11-20 08:52:03.110089] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:32.507 [2024-11-20 08:52:03.110100] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:32.507 [2024-11-20 08:52:03.110109] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:33.444 [2024-11-20 08:52:04.109788] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:33.444 request: 00:18:33.444 { 00:18:33.444 "name": "nvme_second", 00:18:33.444 "trtype": "tcp", 00:18:33.444 "traddr": "10.0.0.3", 00:18:33.444 "adrfam": "ipv4", 00:18:33.444 "trsvcid": "8010", 00:18:33.444 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:33.444 "wait_for_attach": false, 00:18:33.444 "attach_timeout_ms": 3000, 00:18:33.444 "method": "bdev_nvme_start_discovery", 00:18:33.444 "req_id": 1 00:18:33.444 } 00:18:33.444 Got JSON-RPC error response 00:18:33.444 response: 00:18:33.444 { 00:18:33.444 "code": -110, 00:18:33.444 "message": "Connection timed out" 00:18:33.444 } 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:33.444 08:52:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76436 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.444 rmmod nvme_tcp 00:18:33.444 rmmod nvme_fabrics 00:18:33.444 rmmod nvme_keyring 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76411 ']' 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76411 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76411 ']' 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76411 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76411 00:18:33.444 killing process with pid 76411 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76411' 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76411 00:18:33.444 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76411 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:33.716 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:33.717 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:33.717 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:33.717 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:33.717 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:33.717 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:33.717 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:33.717 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:33.975 00:18:33.975 real 0m9.818s 00:18:33.975 user 0m18.819s 00:18:33.975 sys 0m2.111s 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.975 ************************************ 00:18:33.975 END TEST nvmf_host_discovery 00:18:33.975 ************************************ 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.975 ************************************ 
00:18:33.975 START TEST nvmf_host_multipath_status 00:18:33.975 ************************************ 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:33.975 * Looking for test storage... 00:18:33.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:33.975 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:34.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.235 --rc genhtml_branch_coverage=1 00:18:34.235 --rc genhtml_function_coverage=1 00:18:34.235 --rc genhtml_legend=1 00:18:34.235 --rc geninfo_all_blocks=1 00:18:34.235 --rc geninfo_unexecuted_blocks=1 00:18:34.235 00:18:34.235 ' 00:18:34.235 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:34.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.235 --rc genhtml_branch_coverage=1 00:18:34.235 --rc genhtml_function_coverage=1 00:18:34.235 --rc genhtml_legend=1 00:18:34.235 --rc geninfo_all_blocks=1 00:18:34.235 --rc geninfo_unexecuted_blocks=1 00:18:34.235 00:18:34.235 ' 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.236 --rc genhtml_branch_coverage=1 00:18:34.236 --rc genhtml_function_coverage=1 00:18:34.236 --rc genhtml_legend=1 00:18:34.236 --rc geninfo_all_blocks=1 00:18:34.236 --rc geninfo_unexecuted_blocks=1 00:18:34.236 00:18:34.236 ' 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.236 --rc genhtml_branch_coverage=1 00:18:34.236 --rc genhtml_function_coverage=1 00:18:34.236 --rc genhtml_legend=1 00:18:34.236 --rc geninfo_all_blocks=1 00:18:34.236 --rc geninfo_unexecuted_blocks=1 00:18:34.236 00:18:34.236 ' 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.236 08:52:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.236 08:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.236 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:34.236 Cannot find device "nvmf_init_br" 00:18:34.236 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:34.237 Cannot find device "nvmf_init_br2" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:34.237 Cannot find device "nvmf_tgt_br" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.237 Cannot find device "nvmf_tgt_br2" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:34.237 Cannot find device "nvmf_init_br" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:34.237 Cannot find device "nvmf_init_br2" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:34.237 Cannot find device "nvmf_tgt_br" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:34.237 Cannot find device "nvmf_tgt_br2" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:34.237 Cannot find device "nvmf_br" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:34.237 Cannot find device "nvmf_init_if" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:34.237 Cannot find device "nvmf_init_if2" 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:34.237 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:34.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:34.496 00:18:34.496 --- 10.0.0.3 ping statistics --- 00:18:34.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.496 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:34.496 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:34.496 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:18:34.496 00:18:34.496 --- 10.0.0.4 ping statistics --- 00:18:34.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.496 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:18:34.496 00:18:34.496 --- 10.0.0.1 ping statistics --- 00:18:34.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.496 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:34.496 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:34.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:34.497 00:18:34.497 --- 10.0.0.2 ping statistics --- 00:18:34.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.497 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76947 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76947 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76947 ']' 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.497 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:34.755 [2024-11-20 08:52:05.453625] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:18:34.755 [2024-11-20 08:52:05.453753] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.755 [2024-11-20 08:52:05.605115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:35.014 [2024-11-20 08:52:05.682776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.014 [2024-11-20 08:52:05.682863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.014 [2024-11-20 08:52:05.682890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.014 [2024-11-20 08:52:05.682898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.014 [2024-11-20 08:52:05.682905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.014 [2024-11-20 08:52:05.684304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.014 [2024-11-20 08:52:05.684314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.014 [2024-11-20 08:52:05.754792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.014 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.014 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:35.014 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.014 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.014 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:35.014 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.014 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76947 00:18:35.014 08:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:35.580 [2024-11-20 08:52:06.189539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.580 08:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:35.839 Malloc0 00:18:35.839 08:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:36.097 08:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:36.356 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:36.614 [2024-11-20 08:52:07.391523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:36.614 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:36.873 [2024-11-20 08:52:07.707705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76995 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76995 /var/tmp/bdevperf.sock 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76995 ']' 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
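For orientation, the target-side provisioning performed by the RPCs above boils down to the short sequence below. Paths, NQN, sizes and flags are copied verbatim from the trace; the -o and -u 8192 transport arguments come from NVMF_TRANSPORT_OPTS and are reproduced here without further interpretation:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport for the nvmf_tgt running inside nvmf_tgt_ns_spdk.
$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks, exported through an ANA-reporting
# subsystem (-r) that allows any host (-a).
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Two listeners on the same target address give the host two paths to one namespace.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The host side is the bdevperf instance started right above on /var/tmp/bdevperf.sock with a queue depth of 128 and a 4096-byte verify workload capped at 90 seconds (-q 128 -o 4096 -w verify -t 90).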
00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.873 08:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:38.250 08:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.250 08:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:38.250 08:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:38.250 08:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:38.508 Nvme0n1 00:18:38.508 08:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:39.087 Nvme0n1 00:18:39.087 08:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:39.087 08:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.051 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:41.051 08:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:41.310 08:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:41.568 08:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:42.505 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:42.505 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:42.505 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.505 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:42.762 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.762 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:42.762 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.762 08:52:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:43.020 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:43.020 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:43.020 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:43.020 08:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.278 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.278 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:43.278 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.278 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:43.844 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.844 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:43.844 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.844 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:44.102 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.102 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:44.102 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.102 08:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:44.420 08:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.420 08:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:44.420 08:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:44.677 08:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:44.935 08:52:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:45.871 08:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:45.871 08:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:45.871 08:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:45.871 08:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:46.129 08:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:46.129 08:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:46.129 08:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.129 08:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:46.388 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.388 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:46.388 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.388 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:46.647 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.647 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:46.647 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:46.647 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.906 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.906 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:46.906 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.906 08:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:47.164 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.164 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:47.164 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.164 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:47.423 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.423 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:47.423 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:47.682 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:47.941 08:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:49.316 08:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:49.316 08:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:49.316 08:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:49.316 08:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.316 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.316 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:49.316 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:49.316 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.574 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:49.574 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:49.574 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.574 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:49.831 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.831 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:18:49.832 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.832 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:50.090 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.090 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:50.090 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.090 08:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:50.348 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.348 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:50.348 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.348 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:50.606 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.606 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:50.606 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:50.865 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:51.123 08:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:52.501 08:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:52.501 08:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:52.501 08:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.501 08:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:52.501 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.501 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:52.501 08:52:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.501 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:52.760 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:52.760 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:52.760 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.760 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:53.018 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.018 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:53.018 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.018 08:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:53.277 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.277 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:53.277 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:53.277 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.545 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.545 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:53.545 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.545 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:53.853 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:53.853 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:53.853 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:54.112 08:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:54.371 08:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:55.307 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:55.307 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:55.307 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.307 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:55.877 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:55.877 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:55.877 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.877 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:56.136 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:56.136 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:56.136 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.136 08:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:56.394 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.394 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:56.394 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.394 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:56.654 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.654 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:56.654 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.654 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:18:56.913 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:56.913 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:56.913 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.913 08:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:57.172 08:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:57.172 08:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:57.172 08:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:57.431 08:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:57.690 08:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:58.675 08:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:58.675 08:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:58.675 08:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.675 08:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:58.934 08:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:58.934 08:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:58.934 08:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.934 08:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:59.501 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.501 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:59.501 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.501 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
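Each set_ANA_state step in this trace is just a pair of listener-level RPCs against the target, one per port, followed by a short settle time. A minimal equivalent of the "inaccessible optimized" transition logged above would be the following (rpc path and NQN copied from the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Port 4420 is made inaccessible while port 4421 stays optimized; as the
# subsequent check_status shows, the host keeps both controllers connected
# and only the accessible/current flags change.
$rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n optimized

sleep 1   # matches the 'sleep 1' the script runs before re-checking path status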
00:18:59.760 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.760 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:59.760 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.760 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:00.020 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.020 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:00.020 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.020 08:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:00.278 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:00.278 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:00.278 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.278 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:00.536 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.536 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:00.795 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:00.795 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:01.054 08:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:01.312 08:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:02.714 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:02.714 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:02.714 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
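All of the port_status checks sprinkled through this run follow one pattern: query bdevperf's bdev_nvme_get_io_paths over /var/tmp/bdevperf.sock and pull a single field out with jq. Below is a rough standalone equivalent of the helper; the argument order matches the calls visible in the trace, but the body is a reconstruction, not the verbatim function from host/multipath_status.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# port_status <trsvcid> <field> <expected>: succeeds if the I/O path on the
# given port reports the expected value for current/connected/accessible.
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}

# e.g. with the active_active policy and both listeners optimized (as just set above),
# both ports are expected to be current:
port_status 4420 current true && port_status 4421 current true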
00:19:02.714 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:02.714 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.714 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:02.714 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.714 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:02.974 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.974 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:02.974 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.974 08:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:03.233 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.233 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:03.233 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.233 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:03.494 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.494 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:03.494 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.494 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:04.063 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.063 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:04.063 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:04.063 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.063 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.063 
08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:04.063 08:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:04.322 08:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:04.582 08:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:05.960 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:05.960 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:05.960 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.960 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:05.960 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:05.960 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:05.960 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.960 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:06.219 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.219 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:06.219 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:06.219 08:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.477 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.477 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:06.477 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.477 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:06.736 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.736 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:06.736 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.736 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:06.995 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.995 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:06.995 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.995 08:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:07.254 08:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.254 08:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:07.254 08:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:07.513 08:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:07.772 08:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:09.150 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:09.150 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:09.150 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.150 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:09.150 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.150 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:09.150 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.150 08:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:09.410 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.410 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:09.410 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.410 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:09.670 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.670 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:09.670 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.670 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:10.238 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.238 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:10.238 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.238 08:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:10.238 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.238 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:10.238 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.239 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:10.497 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.497 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:10.497 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:10.756 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:11.324 08:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:12.280 08:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:12.280 08:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:12.280 08:52:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.280 08:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:12.539 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.539 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:12.539 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.539 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:12.798 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:12.798 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:12.798 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.798 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:13.056 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.056 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:13.056 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:13.056 08:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.316 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.316 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:13.316 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.316 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:13.576 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.576 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:13.576 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.576 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:19:13.834 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:13.834 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76995 00:19:13.834 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76995 ']' 00:19:13.834 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76995 00:19:13.834 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:13.834 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.835 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76995 00:19:14.095 killing process with pid 76995 00:19:14.095 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:14.095 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:14.095 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76995' 00:19:14.095 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76995 00:19:14.095 08:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76995 00:19:14.095 { 00:19:14.095 "results": [ 00:19:14.095 { 00:19:14.095 "job": "Nvme0n1", 00:19:14.095 "core_mask": "0x4", 00:19:14.095 "workload": "verify", 00:19:14.095 "status": "terminated", 00:19:14.095 "verify_range": { 00:19:14.095 "start": 0, 00:19:14.095 "length": 16384 00:19:14.095 }, 00:19:14.095 "queue_depth": 128, 00:19:14.095 "io_size": 4096, 00:19:14.095 "runtime": 34.878306, 00:19:14.095 "iops": 8261.066348807193, 00:19:14.095 "mibps": 32.2697904250281, 00:19:14.095 "io_failed": 0, 00:19:14.095 "io_timeout": 0, 00:19:14.095 "avg_latency_us": 15462.114773153213, 00:19:14.095 "min_latency_us": 121.01818181818182, 00:19:14.095 "max_latency_us": 4026531.84 00:19:14.095 } 00:19:14.095 ], 00:19:14.095 "core_count": 1 00:19:14.095 } 00:19:14.358 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76995 00:19:14.358 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:14.358 [2024-11-20 08:52:07.779207] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:19:14.358 [2024-11-20 08:52:07.779371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76995 ] 00:19:14.358 [2024-11-20 08:52:07.929787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.358 [2024-11-20 08:52:08.004186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.358 [2024-11-20 08:52:08.079055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:14.358 Running I/O for 90 seconds... 
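Editor's note: the multipath status loop traced above reduces to two RPC patterns, both visible verbatim in the log: flipping each listener's ANA state with nvmf_subsystem_listener_set_ana_state, and reading bdev_nvme_get_io_paths from the bdevperf RPC socket and filtering it with jq. The stand-alone bash sketch below re-creates that flow for reference; the rpc.py path, NQN, 10.0.0.3 listeners and /var/tmp/bdevperf.sock socket are taken from the trace, while the helper names and the hard-coded expectation at the end are illustrative only, not the test's own functions.

    #!/usr/bin/env bash
    # Sketch of the ANA-state toggle and path-status check traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    set_ana_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    path_field() {      # $1 = port, $2 = field to read: current | connected | accessible
        # One value is printed per poll group; the trace above shows a single group.
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
    }

    set_ana_state non_optimized inaccessible
    sleep 1             # give the initiator time to observe the ANA change, as the test does
    [[ "$(path_field 4421 accessible)" == false ]] || echo "port 4421 is still reported accessible"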
00:19:14.358 7060.00 IOPS, 27.58 MiB/s [2024-11-20T08:52:45.273Z] 6986.50 IOPS, 27.29 MiB/s [2024-11-20T08:52:45.273Z] 7004.33 IOPS, 27.36 MiB/s [2024-11-20T08:52:45.273Z] 7013.25 IOPS, 27.40 MiB/s [2024-11-20T08:52:45.273Z] 7018.40 IOPS, 27.42 MiB/s [2024-11-20T08:52:45.273Z] 6967.50 IOPS, 27.22 MiB/s [2024-11-20T08:52:45.273Z] 7096.71 IOPS, 27.72 MiB/s [2024-11-20T08:52:45.273Z] 7240.62 IOPS, 28.28 MiB/s [2024-11-20T08:52:45.273Z] 7338.22 IOPS, 28.66 MiB/s [2024-11-20T08:52:45.273Z] 7525.20 IOPS, 29.40 MiB/s [2024-11-20T08:52:45.273Z] 7697.82 IOPS, 30.07 MiB/s [2024-11-20T08:52:45.273Z] 7839.00 IOPS, 30.62 MiB/s [2024-11-20T08:52:45.273Z] 7962.77 IOPS, 31.10 MiB/s [2024-11-20T08:52:45.273Z] 8067.71 IOPS, 31.51 MiB/s [2024-11-20T08:52:45.273Z] [2024-11-20 08:52:24.872358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.872981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.872997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.873035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.873074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.873112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.873152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.873220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.358 [2024-11-20 08:52:24.873259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.358 [2024-11-20 08:52:24.873299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.358 [2024-11-20 08:52:24.873338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.358 [2024-11-20 08:52:24.873377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.358 [2024-11-20 08:52:24.873416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.358 [2024-11-20 08:52:24.873471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.358 [2024-11-20 08:52:24.873510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.358 [2024-11-20 08:52:24.873551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.358 [2024-11-20 08:52:24.873590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.358 [2024-11-20 08:52:24.873611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.873652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.873705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.359 [2024-11-20 08:52:24.873743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.873781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.873818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.873886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.873926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.873964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.873989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.874293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.874346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.874384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.874421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.874460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.874505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.874545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.359 [2024-11-20 08:52:24.874583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.874945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.874963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
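Editor's note: every completion notice in this stretch carries the same NVMe status, ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. path-related status code type 0x3 with status code 0x02, which the target returns for I/O sent to a listener whose ANA state has just been made inaccessible. When going back over the captured bdevperf log (the try.txt file cat'ed at multipath_status.sh@141 above), a plain grep gives a quick feel for how many I/Os were bounced this way; the counting pipeline below is only a sketch.

    # Count completions in the captured log that returned ASYMMETRIC ACCESS INACCESSIBLE (03/02).
    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"

    # Break the same count down by qid/cid, most frequent first.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]* cid:[0-9]*' "$log" \
        | sort | uniq -c | sort -rn | head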
00:19:14.359 [2024-11-20 08:52:24.874985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.875008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.875041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.875059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.875081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.875098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.875120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.875137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.875160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.875177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.875199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.359 [2024-11-20 08:52:24.875215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.359 [2024-11-20 08:52:24.875238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.360 [2024-11-20 08:52:24.875255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.875970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.875987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.360 [2024-11-20 08:52:24.876152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.360 [2024-11-20 08:52:24.876192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:14.360 [2024-11-20 08:52:24.876232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.360 [2024-11-20 08:52:24.876271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.360 [2024-11-20 08:52:24.876310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.360 [2024-11-20 08:52:24.876349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.360 [2024-11-20 08:52:24.876401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.360 [2024-11-20 08:52:24.876440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.360 [2024-11-20 08:52:24.876865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.360 [2024-11-20 08:52:24.876892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.876909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.876931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.876948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.876970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.876987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.877026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.877073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.877122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.877161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.877200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.361 [2024-11-20 08:52:24.877238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.361 [2024-11-20 08:52:24.877278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.361 [2024-11-20 08:52:24.877317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.361 [2024-11-20 08:52:24.877356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.361 [2024-11-20 08:52:24.877395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.877416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.361 [2024-11-20 08:52:24.877434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
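Editor's note: the JSON block printed further up when bdevperf was killed (the "results" array for the Nvme0n1 verify job, with its iops, runtime and latency figures) is also easy to post-process. A jq one-liner such as the following pulls out the headline numbers; the bdevperf_result.json file name is only an assumption about where that block might be saved, since the log itself only prints it to stdout.

    # Summarize the terminated bdevperf job from its saved JSON result block.
    jq -r '.results[0]
           | "\(.job): \(.iops | floor) IOPS over \(.runtime)s, avg latency \(.avg_latency_us) us"' \
        bdevperf_result.json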
00:19:14.361 [2024-11-20 08:52:24.877464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.361 [2024-11-20 08:52:24.877482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.361 [2024-11-20 08:52:24.878238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:24.878736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:24.878754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.361 8144.27 IOPS, 31.81 MiB/s [2024-11-20T08:52:45.276Z] 7635.25 IOPS, 29.83 MiB/s [2024-11-20T08:52:45.276Z] 7186.12 IOPS, 28.07 MiB/s [2024-11-20T08:52:45.276Z] 6786.89 IOPS, 26.51 MiB/s [2024-11-20T08:52:45.276Z] 6433.95 IOPS, 25.13 MiB/s [2024-11-20T08:52:45.276Z] 6522.85 IOPS, 25.48 MiB/s [2024-11-20T08:52:45.276Z] 6609.43 IOPS, 25.82 MiB/s [2024-11-20T08:52:45.276Z] 6676.64 IOPS, 26.08 MiB/s [2024-11-20T08:52:45.276Z] 6940.13 IOPS, 27.11 MiB/s [2024-11-20T08:52:45.276Z] 7179.92 IOPS, 28.05 MiB/s [2024-11-20T08:52:45.276Z] 7370.72 IOPS, 28.79 MiB/s [2024-11-20T08:52:45.276Z] 7490.81 IOPS, 29.26 MiB/s [2024-11-20T08:52:45.276Z] 7558.56 IOPS, 29.53 MiB/s [2024-11-20T08:52:45.276Z] 7621.46 IOPS, 29.77 MiB/s [2024-11-20T08:52:45.276Z] 7703.79 IOPS, 30.09 MiB/s [2024-11-20T08:52:45.276Z] 7859.03 IOPS, 30.70 MiB/s [2024-11-20T08:52:45.276Z] 8016.68 IOPS, 31.32 MiB/s [2024-11-20T08:52:45.276Z] 8166.06 IOPS, 31.90 MiB/s [2024-11-20T08:52:45.276Z] [2024-11-20 08:52:41.912830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:41.912914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:41.912990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:41.913050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:41.913077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:41.913094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:41.913117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:41.913134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:41.913156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:41.913172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 
08:52:41.913194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.361 [2024-11-20 08:52:41.913210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.361 [2024-11-20 08:52:41.913232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.913248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.913292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.913330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.913368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.913482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.913529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.913965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.913987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.914004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.914042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.914093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.914133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.914171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.914210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.914249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.914287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.914326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.914366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 
08:52:41.914405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.362 [2024-11-20 08:52:41.914444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.914467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.914484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.915723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.915753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.362 [2024-11-20 08:52:41.915793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.362 [2024-11-20 08:52:41.915827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.915851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.363 [2024-11-20 08:52:41.915868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.915891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.363 [2024-11-20 08:52:41.915907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.915929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.915946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.915968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.915984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.916023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126568 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.916061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.916099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.363 [2024-11-20 08:52:41.916138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.363 [2024-11-20 08:52:41.916176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.916214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.916254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.916323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.916364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.363 [2024-11-20 08:52:41.916403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.363 [2024-11-20 08:52:41.916442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.363 [2024-11-20 08:52:41.916481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.363 [2024-11-20 08:52:41.916503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.363 [2024-11-20 08:52:41.916520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.363 8203.03 IOPS, 32.04 MiB/s [2024-11-20T08:52:45.278Z] 8237.06 IOPS, 32.18 MiB/s [2024-11-20T08:52:45.278Z] Received shutdown signal, test time was about 34.879162 seconds 00:19:14.363 00:19:14.363 Latency(us) 00:19:14.363 [2024-11-20T08:52:45.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.363 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.363 Verification LBA range: start 0x0 length 0x4000 00:19:14.363 Nvme0n1 : 34.88 8261.07 32.27 0.00 0.00 15462.11 121.02 4026531.84 00:19:14.363 [2024-11-20T08:52:45.278Z] =================================================================================================================== 00:19:14.363 [2024-11-20T08:52:45.278Z] Total : 8261.07 32.27 0.00 0.00 15462.11 121.02 4026531.84 00:19:14.363 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.622 rmmod nvme_tcp 00:19:14.622 rmmod nvme_fabrics 00:19:14.622 rmmod nvme_keyring 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76947 ']' 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76947 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76947 ']' 00:19:14.622 08:52:45 
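
The trace above is the multipath_status teardown: the test subsystem is removed over the RPC socket, the traps and scratch file are cleared, and nvmftestfini unloads the initiator modules before the target process is stopped. A condensed sketch of that sequence, with $rootdir and $nvmfpid used here only as placeholders for the repo path and the target PID (76947 in this run):

    # remove the test subsystem, then drop traps and the scratch output file
    "$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    trap - SIGINT SIGTERM EXIT
    rm -f "$rootdir/test/nvmf/host/try.txt"
    # flush I/O, unload the kernel initiator modules, then stop nvmf_tgt
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
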
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76947 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76947 00:19:14.622 killing process with pid 76947 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76947' 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76947 00:19:14.622 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76947 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:14.882 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:15.140 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:15.140 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- 
# ip link delete nvmf_init_if2 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.141 08:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.141 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:15.141 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.141 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.141 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.141 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:19:15.141 ************************************ 00:19:15.141 END TEST nvmf_host_multipath_status 00:19:15.141 ************************************ 00:19:15.141 00:19:15.141 real 0m41.245s 00:19:15.141 user 2m13.276s 00:19:15.141 sys 0m12.187s 00:19:15.141 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.141 08:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.400 ************************************ 00:19:15.400 START TEST nvmf_discovery_remove_ifc 00:19:15.400 ************************************ 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:15.400 * Looking for test storage... 
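
Just before the END TEST marker above, nvmf_veth_fini dismantles the virtual network used by the test. A minimal sketch of that order, condensed from the trace: only the SPDK-tagged iptables rules are dropped, the veth ends are detached from the bridge and deleted, and the namespace goes away last (remove_spdk_ns is shown here simply as an ip netns delete):

    # keep every firewall rule except the ones SPDK tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach the bridge-side veth ends and bring them down
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    # the target-side ends live in the namespace; delete them there, then the namespace itself
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk
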
00:19:15.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.400 --rc genhtml_branch_coverage=1 00:19:15.400 --rc genhtml_function_coverage=1 00:19:15.400 --rc genhtml_legend=1 00:19:15.400 --rc geninfo_all_blocks=1 00:19:15.400 --rc geninfo_unexecuted_blocks=1 00:19:15.400 00:19:15.400 ' 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.400 --rc genhtml_branch_coverage=1 00:19:15.400 --rc genhtml_function_coverage=1 00:19:15.400 --rc genhtml_legend=1 00:19:15.400 --rc geninfo_all_blocks=1 00:19:15.400 --rc geninfo_unexecuted_blocks=1 00:19:15.400 00:19:15.400 ' 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.400 --rc genhtml_branch_coverage=1 00:19:15.400 --rc genhtml_function_coverage=1 00:19:15.400 --rc genhtml_legend=1 00:19:15.400 --rc geninfo_all_blocks=1 00:19:15.400 --rc geninfo_unexecuted_blocks=1 00:19:15.400 00:19:15.400 ' 00:19:15.400 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.400 --rc genhtml_branch_coverage=1 00:19:15.400 --rc genhtml_function_coverage=1 00:19:15.400 --rc genhtml_legend=1 00:19:15.400 --rc geninfo_all_blocks=1 00:19:15.400 --rc geninfo_unexecuted_blocks=1 00:19:15.400 00:19:15.400 ' 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.401 08:52:46 
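
The scripts/common.sh block above decides whether the installed lcov (1.15) predates version 2 before choosing LCOV_OPTS: both version strings are split on '.', '-' and ':' and compared field by field. A compressed sketch of that comparison; cmp_versions_sketch is a hypothetical stand-in for the real cmp_versions/lt helpers and assumes purely numeric fields:

    cmp_versions_sketch() {
        local IFS=.-:
        local -a ver1 ver2
        local v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # walk the longer of the two field lists, treating missing fields as 0
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' || $2 == '<=' || $2 == '>=' ]]   # all fields equal
    }
    cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 is older than 2"
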
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.401 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.660 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:19:15.660 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:19:15.660 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.660 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.660 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.660 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.660 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.661 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
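
The "[: : integer expression expected" message above is harmless: nvmf/common.sh hands [ an empty string where -eq needs an integer (the flag it checks is simply not set in this job), so the test prints the complaint, evaluates false, and the run continues. A small illustration with MY_FLAG as a made-up stand-in for such a variable, plus the usual way to keep the comparison quiet:

    MY_FLAG=''                                            # unset/empty, as in the trace
    if [ "$MY_FLAG" -eq 1 ]; then echo enabled; fi        # prints "[: : integer expression expected", branch not taken
    if [ "${MY_FLAG:-0}" -eq 1 ]; then echo enabled; fi   # defaulting to 0 keeps the test numeric and silent
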
-- # discovery_port=8009 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.661 08:52:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:15.661 Cannot find device "nvmf_init_br" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:15.661 Cannot find device "nvmf_init_br2" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:15.661 Cannot find device "nvmf_tgt_br" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.661 Cannot find device "nvmf_tgt_br2" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:15.661 Cannot find device "nvmf_init_br" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:15.661 Cannot find device "nvmf_init_br2" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:15.661 Cannot find device "nvmf_tgt_br" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:15.661 Cannot find device "nvmf_tgt_br2" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:15.661 Cannot find device "nvmf_br" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:15.661 Cannot find device "nvmf_init_if" 00:19:15.661 08:52:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:15.661 Cannot find device "nvmf_init_if2" 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:15.661 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:15.662 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:15.922 08:52:46 
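
The run of commands above is nvmf_veth_init rebuilding the topology the previous test tore down: a network namespace for the target, two veth pairs per side, 10.0.0.1/2 on the initiator ends in the root namespace and 10.0.0.3/4 on the target ends inside the namespace. A condensed sketch showing one pair per side (the *_if2/*_br2 pair in the trace mirrors it exactly):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end carries the address, the *_br end will be enslaved to a bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # a bridge joins the *_br ends so host and namespace can reach each other; the trace then
    # adds SPDK-tagged iptables ACCEPT rules and ping-checks all four addresses
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
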
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:15.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:15.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:19:15.922 00:19:15.922 --- 10.0.0.3 ping statistics --- 00:19:15.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.922 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:15.922 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:15.922 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:19:15.922 00:19:15.922 --- 10.0.0.4 ping statistics --- 00:19:15.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.922 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:15.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:15.922 00:19:15.922 --- 10.0.0.1 ping statistics --- 00:19:15.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.922 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:15.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:19:15.922 00:19:15.922 --- 10.0.0.2 ping statistics --- 00:19:15.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.922 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77858 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77858 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77858 ']' 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
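
From here nvmfappstart launches nvmf_tgt inside the namespace (core mask 0x2) and waitforlisten blocks until the RPC socket answers; the trace then continues with the TCP transport and the discovery (8009) and 4420 listeners on 10.0.0.3. A simplified sketch of that start-and-wait step, with the polling loop standing in for the real waitforlisten helper:

    # start the target on core 1 inside the namespace, remembering its PID
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # do not issue rpc.py calls until /var/tmp/spdk.sock is up and answering
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
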
00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.922 08:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:15.922 [2024-11-20 08:52:46.799194] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:19:15.922 [2024-11-20 08:52:46.799305] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.182 [2024-11-20 08:52:46.951219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.182 [2024-11-20 08:52:47.030481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.182 [2024-11-20 08:52:47.030821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.182 [2024-11-20 08:52:47.030979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.182 [2024-11-20 08:52:47.031081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.182 [2024-11-20 08:52:47.031160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.182 [2024-11-20 08:52:47.031752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.441 [2024-11-20 08:52:47.108408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.009 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.009 [2024-11-20 08:52:47.919018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.268 [2024-11-20 08:52:47.927183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:17.269 null0 00:19:17.269 [2024-11-20 08:52:47.959106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77897 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77897 /tmp/host.sock 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77897 ']' 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:17.269 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.269 08:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.269 [2024-11-20 08:52:48.047067] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:19:17.269 [2024-11-20 08:52:48.047163] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77897 ] 00:19:17.528 [2024-11-20 08:52:48.198157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.528 [2024-11-20 08:52:48.278726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.528 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.528 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:17.528 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.528 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:17.528 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.528 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.529 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.529 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:17.529 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.529 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.529 [2024-11-20 08:52:48.409482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:17.788 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.788 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:17.788 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.788 08:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:18.725 [2024-11-20 08:52:49.482456] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:18.725 [2024-11-20 08:52:49.482522] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:18.725 [2024-11-20 08:52:49.482545] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:18.725 [2024-11-20 08:52:49.488513] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:18.725 [2024-11-20 08:52:49.543051] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:18.725 [2024-11-20 08:52:49.544489] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1568fc0:1 started. 00:19:18.725 [2024-11-20 08:52:49.546507] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:18.725 [2024-11-20 08:52:49.546574] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:18.725 [2024-11-20 08:52:49.546604] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:18.725 [2024-11-20 08:52:49.546622] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:18.725 [2024-11-20 08:52:49.546653] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:18.725 [2024-11-20 08:52:49.550976] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1568fc0 was disconnected and freed. delete nvme_qpair. 
00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.725 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:18.983 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.983 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:18.983 08:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:19.950 08:52:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:19.950 08:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:20.886 08:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:22.262 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:22.262 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:22.262 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:22.262 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.262 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.262 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:22.262 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:22.263 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.263 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:22.263 08:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.197 08:52:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:23.197 08:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.133 [2024-11-20 08:52:54.974766] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:24.133 [2024-11-20 08:52:54.974883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.133 [2024-11-20 08:52:54.974902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.133 [2024-11-20 08:52:54.974916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.133 [2024-11-20 08:52:54.974927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.133 [2024-11-20 08:52:54.974938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.133 [2024-11-20 08:52:54.974947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.133 [2024-11-20 08:52:54.974959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.133 [2024-11-20 08:52:54.974968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.133 [2024-11-20 08:52:54.974978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.133 [2024-11-20 08:52:54.974987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.133 [2024-11-20 08:52:54.974997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1545240 is same with the state(6) to be set 00:19:24.133 08:52:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:24.133 08:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:24.134 [2024-11-20 08:52:54.984761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1545240 (9): Bad file descriptor 00:19:24.134 [2024-11-20 08:52:54.994779] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:24.134 [2024-11-20 08:52:54.994831] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:24.134 [2024-11-20 08:52:54.994855] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:24.134 [2024-11-20 08:52:54.994862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:24.134 [2024-11-20 08:52:54.994921] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:25.071 08:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:25.071 08:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.071 08:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.071 08:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:25.071 08:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:25.072 08:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:25.072 08:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.330 [2024-11-20 08:52:56.008889] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:25.330 [2024-11-20 08:52:56.009003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545240 with addr=10.0.0.3, port=4420 00:19:25.330 [2024-11-20 08:52:56.009041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1545240 is same with the state(6) to be set 00:19:25.330 [2024-11-20 08:52:56.009126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1545240 (9): Bad file descriptor 00:19:25.330 [2024-11-20 08:52:56.010011] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:25.330 [2024-11-20 08:52:56.010091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:25.330 [2024-11-20 08:52:56.010116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:25.330 [2024-11-20 08:52:56.010138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:25.330 [2024-11-20 08:52:56.010158] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:19:25.330 [2024-11-20 08:52:56.010171] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:25.330 [2024-11-20 08:52:56.010183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:25.330 [2024-11-20 08:52:56.010204] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:25.330 [2024-11-20 08:52:56.010216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:25.330 08:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.330 08:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:25.330 08:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:26.268 [2024-11-20 08:52:57.010279] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:26.268 [2024-11-20 08:52:57.010352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:26.268 [2024-11-20 08:52:57.010381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:26.268 [2024-11-20 08:52:57.010393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:26.268 [2024-11-20 08:52:57.010404] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:26.268 [2024-11-20 08:52:57.010414] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:26.268 [2024-11-20 08:52:57.010421] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:26.268 [2024-11-20 08:52:57.010427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:19:26.268 [2024-11-20 08:52:57.010467] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:26.268 [2024-11-20 08:52:57.010559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.268 [2024-11-20 08:52:57.010577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.268 [2024-11-20 08:52:57.010592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.268 [2024-11-20 08:52:57.010601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.268 [2024-11-20 08:52:57.010611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.268 [2024-11-20 08:52:57.010620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.268 [2024-11-20 08:52:57.010630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.268 [2024-11-20 08:52:57.010639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.268 [2024-11-20 08:52:57.010649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.268 [2024-11-20 08:52:57.010658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.268 [2024-11-20 08:52:57.010668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:19:26.268 [2024-11-20 08:52:57.010687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d0a20 (9): Bad file descriptor 00:19:26.268 [2024-11-20 08:52:57.011367] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:26.268 [2024-11-20 08:52:57.011407] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:26.268 08:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:27.644 08:52:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:27.644 08:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:28.212 [2024-11-20 08:52:59.016929] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:28.212 [2024-11-20 08:52:59.017298] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:28.212 [2024-11-20 08:52:59.017338] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:28.212 [2024-11-20 08:52:59.022972] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:28.212 [2024-11-20 08:52:59.077416] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:28.212 [2024-11-20 08:52:59.078784] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1521f00:1 started. 00:19:28.212 [2024-11-20 08:52:59.080461] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:28.212 [2024-11-20 08:52:59.080689] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:28.212 [2024-11-20 08:52:59.080758] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:28.212 [2024-11-20 08:52:59.080942] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:28.212 [2024-11-20 08:52:59.081076] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:28.212 [2024-11-20 08:52:59.085763] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1521f00 was disconnected and freed. delete nvme_qpair. 
00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77897 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77897 ']' 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77897 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77897 00:19:28.472 killing process with pid 77897 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77897' 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77897 00:19:28.472 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77897 00:19:28.731 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:28.731 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.731 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:28.990 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.991 rmmod nvme_tcp 00:19:28.991 rmmod nvme_fabrics 00:19:28.991 rmmod nvme_keyring 00:19:28.991 08:52:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77858 ']' 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77858 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77858 ']' 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77858 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77858 00:19:28.991 killing process with pid 77858 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77858' 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77858 00:19:28.991 08:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77858 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:29.250 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:29.509 00:19:29.509 real 0m14.158s 00:19:29.509 user 0m23.707s 00:19:29.509 sys 0m2.566s 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.509 ************************************ 00:19:29.509 END TEST nvmf_discovery_remove_ifc 00:19:29.509 ************************************ 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.509 08:53:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.509 ************************************ 00:19:29.509 START TEST nvmf_identify_kernel_target 00:19:29.510 ************************************ 00:19:29.510 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:29.510 * Looking for test storage... 
00:19:29.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:29.510 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:29.510 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:29.510 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:29.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.770 --rc genhtml_branch_coverage=1 00:19:29.770 --rc genhtml_function_coverage=1 00:19:29.770 --rc genhtml_legend=1 00:19:29.770 --rc geninfo_all_blocks=1 00:19:29.770 --rc geninfo_unexecuted_blocks=1 00:19:29.770 00:19:29.770 ' 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:29.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.770 --rc genhtml_branch_coverage=1 00:19:29.770 --rc genhtml_function_coverage=1 00:19:29.770 --rc genhtml_legend=1 00:19:29.770 --rc geninfo_all_blocks=1 00:19:29.770 --rc geninfo_unexecuted_blocks=1 00:19:29.770 00:19:29.770 ' 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:29.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.770 --rc genhtml_branch_coverage=1 00:19:29.770 --rc genhtml_function_coverage=1 00:19:29.770 --rc genhtml_legend=1 00:19:29.770 --rc geninfo_all_blocks=1 00:19:29.770 --rc geninfo_unexecuted_blocks=1 00:19:29.770 00:19:29.770 ' 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:29.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.770 --rc genhtml_branch_coverage=1 00:19:29.770 --rc genhtml_function_coverage=1 00:19:29.770 --rc genhtml_legend=1 00:19:29.770 --rc geninfo_all_blocks=1 00:19:29.770 --rc geninfo_unexecuted_blocks=1 00:19:29.770 00:19:29.770 ' 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.770 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.771 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:29.771 08:53:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:29.771 08:53:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:29.771 Cannot find device "nvmf_init_br" 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:29.771 Cannot find device "nvmf_init_br2" 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:29.771 Cannot find device "nvmf_tgt_br" 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:29.771 Cannot find device "nvmf_tgt_br2" 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:29.771 Cannot find device "nvmf_init_br" 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:29.771 Cannot find device "nvmf_init_br2" 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:29.771 Cannot find device "nvmf_tgt_br" 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:29.771 Cannot find device "nvmf_tgt_br2" 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:29.771 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:30.030 Cannot find device "nvmf_br" 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:30.031 Cannot find device "nvmf_init_if" 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:30.031 Cannot find device "nvmf_init_if2" 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.031 08:53:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:30.031 08:53:00 
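At this point nvmf_veth_init has built the whole virtual test network: two veth pairs for the initiator side, two for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1/.2 on the initiator interfaces and 10.0.0.3/.4 inside the namespace, plus an nvmf_br bridge that the lines below attach the peer ends to before verifying connectivity. A condensed sketch of the same setup, reduced to one initiator/target pair (names and addresses as in the log):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                             # initiator reaches the namespaced target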
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:30.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:30.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.144 ms 00:19:30.031 00:19:30.031 --- 10.0.0.3 ping statistics --- 00:19:30.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.031 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:30.031 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:30.031 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:19:30.031 00:19:30.031 --- 10.0.0.4 ping statistics --- 00:19:30.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.031 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:30.031 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:30.290 00:19:30.290 --- 10.0.0.1 ping statistics --- 00:19:30.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.290 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:30.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:30.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:30.290 00:19:30.290 --- 10.0.0.2 ping statistics --- 00:19:30.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.290 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.290 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:30.291 08:53:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:30.291 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:30.291 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:30.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:30.550 Waiting for block devices as requested 00:19:30.550 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:30.810 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:30.810 No valid GPT data, bailing 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:30.810 08:53:01 
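The device scan running above (and continuing just below) is how configure_kernel_target picks a namespace to export: every /sys/block/nvme* entry is considered, zoned namespaces are skipped, and a device only counts as free when neither spdk-gpt.py nor blkid finds a partition table on it; the last device that passes is kept. A compact sketch of that selection, with block_in_use reduced to the blkid probe:

nvme=""
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # skip zoned namespaces
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # no partition table found means the device is not in use
    if ! blkid -s PTTYPE -o value "/dev/$dev" >/dev/null 2>&1; then
        nvme=/dev/$dev          # last free device wins; /dev/nvme1n1 in this run
    fi
done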
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:30.810 No valid GPT data, bailing 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:30.810 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:31.119 No valid GPT data, bailing 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
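Once the scan settles on a free namespace (/dev/nvme1n1 a few lines below), configure_kernel_target exports it through the kernel nvmet configfs tree. The xtrace that follows shows only the echoed values, because redirections are not traced, so the attribute files in this sketch are filled in from the standard nvmet configfs layout rather than read from the log:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # best guess for where this echo lands
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # expose the subsystem on the port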
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:31.119 No valid GPT data, bailing 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -a 10.0.0.1 -t tcp -s 4420 00:19:31.119 00:19:31.119 Discovery Log Number of Records 2, Generation counter 2 00:19:31.119 =====Discovery Log Entry 0====== 00:19:31.119 trtype: tcp 00:19:31.119 adrfam: ipv4 00:19:31.119 subtype: current discovery subsystem 00:19:31.119 treq: not specified, sq flow control disable supported 00:19:31.119 portid: 1 00:19:31.119 trsvcid: 4420 00:19:31.119 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:31.119 traddr: 10.0.0.1 00:19:31.119 eflags: none 00:19:31.119 sectype: none 00:19:31.119 =====Discovery Log Entry 1====== 00:19:31.119 trtype: tcp 00:19:31.119 adrfam: ipv4 00:19:31.119 subtype: nvme subsystem 00:19:31.119 treq: not 
specified, sq flow control disable supported 00:19:31.119 portid: 1 00:19:31.119 trsvcid: 4420 00:19:31.119 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:31.119 traddr: 10.0.0.1 00:19:31.119 eflags: none 00:19:31.119 sectype: none 00:19:31.119 08:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:31.119 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:31.378 ===================================================== 00:19:31.378 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:31.378 ===================================================== 00:19:31.378 Controller Capabilities/Features 00:19:31.378 ================================ 00:19:31.378 Vendor ID: 0000 00:19:31.378 Subsystem Vendor ID: 0000 00:19:31.378 Serial Number: f6b98942eafd3ebba761 00:19:31.378 Model Number: Linux 00:19:31.378 Firmware Version: 6.8.9-20 00:19:31.378 Recommended Arb Burst: 0 00:19:31.378 IEEE OUI Identifier: 00 00 00 00:19:31.378 Multi-path I/O 00:19:31.378 May have multiple subsystem ports: No 00:19:31.378 May have multiple controllers: No 00:19:31.378 Associated with SR-IOV VF: No 00:19:31.378 Max Data Transfer Size: Unlimited 00:19:31.379 Max Number of Namespaces: 0 00:19:31.379 Max Number of I/O Queues: 1024 00:19:31.379 NVMe Specification Version (VS): 1.3 00:19:31.379 NVMe Specification Version (Identify): 1.3 00:19:31.379 Maximum Queue Entries: 1024 00:19:31.379 Contiguous Queues Required: No 00:19:31.379 Arbitration Mechanisms Supported 00:19:31.379 Weighted Round Robin: Not Supported 00:19:31.379 Vendor Specific: Not Supported 00:19:31.379 Reset Timeout: 7500 ms 00:19:31.379 Doorbell Stride: 4 bytes 00:19:31.379 NVM Subsystem Reset: Not Supported 00:19:31.379 Command Sets Supported 00:19:31.379 NVM Command Set: Supported 00:19:31.379 Boot Partition: Not Supported 00:19:31.379 Memory Page Size Minimum: 4096 bytes 00:19:31.379 Memory Page Size Maximum: 4096 bytes 00:19:31.379 Persistent Memory Region: Not Supported 00:19:31.379 Optional Asynchronous Events Supported 00:19:31.379 Namespace Attribute Notices: Not Supported 00:19:31.379 Firmware Activation Notices: Not Supported 00:19:31.379 ANA Change Notices: Not Supported 00:19:31.379 PLE Aggregate Log Change Notices: Not Supported 00:19:31.379 LBA Status Info Alert Notices: Not Supported 00:19:31.379 EGE Aggregate Log Change Notices: Not Supported 00:19:31.379 Normal NVM Subsystem Shutdown event: Not Supported 00:19:31.379 Zone Descriptor Change Notices: Not Supported 00:19:31.379 Discovery Log Change Notices: Supported 00:19:31.379 Controller Attributes 00:19:31.379 128-bit Host Identifier: Not Supported 00:19:31.379 Non-Operational Permissive Mode: Not Supported 00:19:31.379 NVM Sets: Not Supported 00:19:31.379 Read Recovery Levels: Not Supported 00:19:31.379 Endurance Groups: Not Supported 00:19:31.379 Predictable Latency Mode: Not Supported 00:19:31.379 Traffic Based Keep ALive: Not Supported 00:19:31.379 Namespace Granularity: Not Supported 00:19:31.379 SQ Associations: Not Supported 00:19:31.379 UUID List: Not Supported 00:19:31.379 Multi-Domain Subsystem: Not Supported 00:19:31.379 Fixed Capacity Management: Not Supported 00:19:31.379 Variable Capacity Management: Not Supported 00:19:31.379 Delete Endurance Group: Not Supported 00:19:31.379 Delete NVM Set: Not Supported 00:19:31.379 Extended LBA Formats Supported: Not Supported 00:19:31.379 Flexible Data 
Placement Supported: Not Supported 00:19:31.379 00:19:31.379 Controller Memory Buffer Support 00:19:31.379 ================================ 00:19:31.379 Supported: No 00:19:31.379 00:19:31.379 Persistent Memory Region Support 00:19:31.379 ================================ 00:19:31.379 Supported: No 00:19:31.379 00:19:31.379 Admin Command Set Attributes 00:19:31.379 ============================ 00:19:31.379 Security Send/Receive: Not Supported 00:19:31.379 Format NVM: Not Supported 00:19:31.379 Firmware Activate/Download: Not Supported 00:19:31.379 Namespace Management: Not Supported 00:19:31.379 Device Self-Test: Not Supported 00:19:31.379 Directives: Not Supported 00:19:31.379 NVMe-MI: Not Supported 00:19:31.379 Virtualization Management: Not Supported 00:19:31.379 Doorbell Buffer Config: Not Supported 00:19:31.379 Get LBA Status Capability: Not Supported 00:19:31.379 Command & Feature Lockdown Capability: Not Supported 00:19:31.379 Abort Command Limit: 1 00:19:31.379 Async Event Request Limit: 1 00:19:31.379 Number of Firmware Slots: N/A 00:19:31.379 Firmware Slot 1 Read-Only: N/A 00:19:31.379 Firmware Activation Without Reset: N/A 00:19:31.379 Multiple Update Detection Support: N/A 00:19:31.379 Firmware Update Granularity: No Information Provided 00:19:31.379 Per-Namespace SMART Log: No 00:19:31.379 Asymmetric Namespace Access Log Page: Not Supported 00:19:31.379 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:31.379 Command Effects Log Page: Not Supported 00:19:31.379 Get Log Page Extended Data: Supported 00:19:31.379 Telemetry Log Pages: Not Supported 00:19:31.379 Persistent Event Log Pages: Not Supported 00:19:31.379 Supported Log Pages Log Page: May Support 00:19:31.379 Commands Supported & Effects Log Page: Not Supported 00:19:31.379 Feature Identifiers & Effects Log Page:May Support 00:19:31.379 NVMe-MI Commands & Effects Log Page: May Support 00:19:31.379 Data Area 4 for Telemetry Log: Not Supported 00:19:31.379 Error Log Page Entries Supported: 1 00:19:31.379 Keep Alive: Not Supported 00:19:31.379 00:19:31.379 NVM Command Set Attributes 00:19:31.379 ========================== 00:19:31.379 Submission Queue Entry Size 00:19:31.379 Max: 1 00:19:31.379 Min: 1 00:19:31.379 Completion Queue Entry Size 00:19:31.379 Max: 1 00:19:31.379 Min: 1 00:19:31.379 Number of Namespaces: 0 00:19:31.379 Compare Command: Not Supported 00:19:31.379 Write Uncorrectable Command: Not Supported 00:19:31.379 Dataset Management Command: Not Supported 00:19:31.379 Write Zeroes Command: Not Supported 00:19:31.379 Set Features Save Field: Not Supported 00:19:31.379 Reservations: Not Supported 00:19:31.379 Timestamp: Not Supported 00:19:31.379 Copy: Not Supported 00:19:31.379 Volatile Write Cache: Not Present 00:19:31.379 Atomic Write Unit (Normal): 1 00:19:31.379 Atomic Write Unit (PFail): 1 00:19:31.379 Atomic Compare & Write Unit: 1 00:19:31.379 Fused Compare & Write: Not Supported 00:19:31.379 Scatter-Gather List 00:19:31.379 SGL Command Set: Supported 00:19:31.379 SGL Keyed: Not Supported 00:19:31.379 SGL Bit Bucket Descriptor: Not Supported 00:19:31.379 SGL Metadata Pointer: Not Supported 00:19:31.379 Oversized SGL: Not Supported 00:19:31.379 SGL Metadata Address: Not Supported 00:19:31.379 SGL Offset: Supported 00:19:31.379 Transport SGL Data Block: Not Supported 00:19:31.379 Replay Protected Memory Block: Not Supported 00:19:31.379 00:19:31.379 Firmware Slot Information 00:19:31.379 ========================= 00:19:31.379 Active slot: 0 00:19:31.379 00:19:31.379 00:19:31.379 Error Log 
00:19:31.379 ========= 00:19:31.379 00:19:31.379 Active Namespaces 00:19:31.379 ================= 00:19:31.379 Discovery Log Page 00:19:31.379 ================== 00:19:31.379 Generation Counter: 2 00:19:31.379 Number of Records: 2 00:19:31.379 Record Format: 0 00:19:31.379 00:19:31.379 Discovery Log Entry 0 00:19:31.379 ---------------------- 00:19:31.379 Transport Type: 3 (TCP) 00:19:31.379 Address Family: 1 (IPv4) 00:19:31.379 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:31.379 Entry Flags: 00:19:31.379 Duplicate Returned Information: 0 00:19:31.379 Explicit Persistent Connection Support for Discovery: 0 00:19:31.379 Transport Requirements: 00:19:31.379 Secure Channel: Not Specified 00:19:31.379 Port ID: 1 (0x0001) 00:19:31.379 Controller ID: 65535 (0xffff) 00:19:31.379 Admin Max SQ Size: 32 00:19:31.379 Transport Service Identifier: 4420 00:19:31.379 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:31.379 Transport Address: 10.0.0.1 00:19:31.379 Discovery Log Entry 1 00:19:31.379 ---------------------- 00:19:31.379 Transport Type: 3 (TCP) 00:19:31.379 Address Family: 1 (IPv4) 00:19:31.379 Subsystem Type: 2 (NVM Subsystem) 00:19:31.379 Entry Flags: 00:19:31.379 Duplicate Returned Information: 0 00:19:31.380 Explicit Persistent Connection Support for Discovery: 0 00:19:31.380 Transport Requirements: 00:19:31.380 Secure Channel: Not Specified 00:19:31.380 Port ID: 1 (0x0001) 00:19:31.380 Controller ID: 65535 (0xffff) 00:19:31.380 Admin Max SQ Size: 32 00:19:31.380 Transport Service Identifier: 4420 00:19:31.380 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:31.380 Transport Address: 10.0.0.1 00:19:31.380 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:31.380 get_feature(0x01) failed 00:19:31.380 get_feature(0x02) failed 00:19:31.380 get_feature(0x04) failed 00:19:31.380 ===================================================== 00:19:31.380 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:31.380 ===================================================== 00:19:31.380 Controller Capabilities/Features 00:19:31.380 ================================ 00:19:31.380 Vendor ID: 0000 00:19:31.380 Subsystem Vendor ID: 0000 00:19:31.380 Serial Number: 3e3bf32e6919f8b0d672 00:19:31.380 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:31.380 Firmware Version: 6.8.9-20 00:19:31.380 Recommended Arb Burst: 6 00:19:31.380 IEEE OUI Identifier: 00 00 00 00:19:31.380 Multi-path I/O 00:19:31.380 May have multiple subsystem ports: Yes 00:19:31.380 May have multiple controllers: Yes 00:19:31.380 Associated with SR-IOV VF: No 00:19:31.380 Max Data Transfer Size: Unlimited 00:19:31.380 Max Number of Namespaces: 1024 00:19:31.380 Max Number of I/O Queues: 128 00:19:31.380 NVMe Specification Version (VS): 1.3 00:19:31.380 NVMe Specification Version (Identify): 1.3 00:19:31.380 Maximum Queue Entries: 1024 00:19:31.380 Contiguous Queues Required: No 00:19:31.380 Arbitration Mechanisms Supported 00:19:31.380 Weighted Round Robin: Not Supported 00:19:31.380 Vendor Specific: Not Supported 00:19:31.380 Reset Timeout: 7500 ms 00:19:31.380 Doorbell Stride: 4 bytes 00:19:31.380 NVM Subsystem Reset: Not Supported 00:19:31.380 Command Sets Supported 00:19:31.380 NVM Command Set: Supported 00:19:31.380 Boot Partition: Not Supported 00:19:31.380 Memory 
Page Size Minimum: 4096 bytes 00:19:31.380 Memory Page Size Maximum: 4096 bytes 00:19:31.380 Persistent Memory Region: Not Supported 00:19:31.380 Optional Asynchronous Events Supported 00:19:31.380 Namespace Attribute Notices: Supported 00:19:31.380 Firmware Activation Notices: Not Supported 00:19:31.380 ANA Change Notices: Supported 00:19:31.380 PLE Aggregate Log Change Notices: Not Supported 00:19:31.380 LBA Status Info Alert Notices: Not Supported 00:19:31.380 EGE Aggregate Log Change Notices: Not Supported 00:19:31.380 Normal NVM Subsystem Shutdown event: Not Supported 00:19:31.380 Zone Descriptor Change Notices: Not Supported 00:19:31.380 Discovery Log Change Notices: Not Supported 00:19:31.380 Controller Attributes 00:19:31.380 128-bit Host Identifier: Supported 00:19:31.380 Non-Operational Permissive Mode: Not Supported 00:19:31.380 NVM Sets: Not Supported 00:19:31.380 Read Recovery Levels: Not Supported 00:19:31.380 Endurance Groups: Not Supported 00:19:31.380 Predictable Latency Mode: Not Supported 00:19:31.380 Traffic Based Keep ALive: Supported 00:19:31.380 Namespace Granularity: Not Supported 00:19:31.380 SQ Associations: Not Supported 00:19:31.380 UUID List: Not Supported 00:19:31.380 Multi-Domain Subsystem: Not Supported 00:19:31.380 Fixed Capacity Management: Not Supported 00:19:31.380 Variable Capacity Management: Not Supported 00:19:31.380 Delete Endurance Group: Not Supported 00:19:31.380 Delete NVM Set: Not Supported 00:19:31.380 Extended LBA Formats Supported: Not Supported 00:19:31.380 Flexible Data Placement Supported: Not Supported 00:19:31.380 00:19:31.380 Controller Memory Buffer Support 00:19:31.380 ================================ 00:19:31.380 Supported: No 00:19:31.380 00:19:31.380 Persistent Memory Region Support 00:19:31.380 ================================ 00:19:31.380 Supported: No 00:19:31.380 00:19:31.380 Admin Command Set Attributes 00:19:31.380 ============================ 00:19:31.380 Security Send/Receive: Not Supported 00:19:31.380 Format NVM: Not Supported 00:19:31.380 Firmware Activate/Download: Not Supported 00:19:31.380 Namespace Management: Not Supported 00:19:31.380 Device Self-Test: Not Supported 00:19:31.380 Directives: Not Supported 00:19:31.380 NVMe-MI: Not Supported 00:19:31.380 Virtualization Management: Not Supported 00:19:31.380 Doorbell Buffer Config: Not Supported 00:19:31.380 Get LBA Status Capability: Not Supported 00:19:31.380 Command & Feature Lockdown Capability: Not Supported 00:19:31.380 Abort Command Limit: 4 00:19:31.380 Async Event Request Limit: 4 00:19:31.380 Number of Firmware Slots: N/A 00:19:31.380 Firmware Slot 1 Read-Only: N/A 00:19:31.380 Firmware Activation Without Reset: N/A 00:19:31.380 Multiple Update Detection Support: N/A 00:19:31.380 Firmware Update Granularity: No Information Provided 00:19:31.380 Per-Namespace SMART Log: Yes 00:19:31.380 Asymmetric Namespace Access Log Page: Supported 00:19:31.380 ANA Transition Time : 10 sec 00:19:31.380 00:19:31.380 Asymmetric Namespace Access Capabilities 00:19:31.380 ANA Optimized State : Supported 00:19:31.380 ANA Non-Optimized State : Supported 00:19:31.380 ANA Inaccessible State : Supported 00:19:31.380 ANA Persistent Loss State : Supported 00:19:31.380 ANA Change State : Supported 00:19:31.380 ANAGRPID is not changed : No 00:19:31.380 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:31.380 00:19:31.380 ANA Group Identifier Maximum : 128 00:19:31.380 Number of ANA Group Identifiers : 128 00:19:31.380 Max Number of Allowed Namespaces : 1024 00:19:31.380 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:31.380 Command Effects Log Page: Supported 00:19:31.380 Get Log Page Extended Data: Supported 00:19:31.380 Telemetry Log Pages: Not Supported 00:19:31.380 Persistent Event Log Pages: Not Supported 00:19:31.380 Supported Log Pages Log Page: May Support 00:19:31.380 Commands Supported & Effects Log Page: Not Supported 00:19:31.380 Feature Identifiers & Effects Log Page:May Support 00:19:31.380 NVMe-MI Commands & Effects Log Page: May Support 00:19:31.380 Data Area 4 for Telemetry Log: Not Supported 00:19:31.380 Error Log Page Entries Supported: 128 00:19:31.380 Keep Alive: Supported 00:19:31.380 Keep Alive Granularity: 1000 ms 00:19:31.380 00:19:31.380 NVM Command Set Attributes 00:19:31.380 ========================== 00:19:31.380 Submission Queue Entry Size 00:19:31.380 Max: 64 00:19:31.380 Min: 64 00:19:31.380 Completion Queue Entry Size 00:19:31.380 Max: 16 00:19:31.380 Min: 16 00:19:31.380 Number of Namespaces: 1024 00:19:31.380 Compare Command: Not Supported 00:19:31.380 Write Uncorrectable Command: Not Supported 00:19:31.380 Dataset Management Command: Supported 00:19:31.380 Write Zeroes Command: Supported 00:19:31.380 Set Features Save Field: Not Supported 00:19:31.380 Reservations: Not Supported 00:19:31.380 Timestamp: Not Supported 00:19:31.380 Copy: Not Supported 00:19:31.380 Volatile Write Cache: Present 00:19:31.380 Atomic Write Unit (Normal): 1 00:19:31.380 Atomic Write Unit (PFail): 1 00:19:31.380 Atomic Compare & Write Unit: 1 00:19:31.380 Fused Compare & Write: Not Supported 00:19:31.380 Scatter-Gather List 00:19:31.381 SGL Command Set: Supported 00:19:31.381 SGL Keyed: Not Supported 00:19:31.381 SGL Bit Bucket Descriptor: Not Supported 00:19:31.381 SGL Metadata Pointer: Not Supported 00:19:31.381 Oversized SGL: Not Supported 00:19:31.381 SGL Metadata Address: Not Supported 00:19:31.381 SGL Offset: Supported 00:19:31.381 Transport SGL Data Block: Not Supported 00:19:31.381 Replay Protected Memory Block: Not Supported 00:19:31.381 00:19:31.381 Firmware Slot Information 00:19:31.381 ========================= 00:19:31.381 Active slot: 0 00:19:31.381 00:19:31.381 Asymmetric Namespace Access 00:19:31.381 =========================== 00:19:31.381 Change Count : 0 00:19:31.381 Number of ANA Group Descriptors : 1 00:19:31.381 ANA Group Descriptor : 0 00:19:31.381 ANA Group ID : 1 00:19:31.381 Number of NSID Values : 1 00:19:31.381 Change Count : 0 00:19:31.381 ANA State : 1 00:19:31.381 Namespace Identifier : 1 00:19:31.381 00:19:31.381 Commands Supported and Effects 00:19:31.381 ============================== 00:19:31.381 Admin Commands 00:19:31.381 -------------- 00:19:31.381 Get Log Page (02h): Supported 00:19:31.381 Identify (06h): Supported 00:19:31.381 Abort (08h): Supported 00:19:31.381 Set Features (09h): Supported 00:19:31.381 Get Features (0Ah): Supported 00:19:31.381 Asynchronous Event Request (0Ch): Supported 00:19:31.381 Keep Alive (18h): Supported 00:19:31.381 I/O Commands 00:19:31.381 ------------ 00:19:31.381 Flush (00h): Supported 00:19:31.381 Write (01h): Supported LBA-Change 00:19:31.381 Read (02h): Supported 00:19:31.381 Write Zeroes (08h): Supported LBA-Change 00:19:31.381 Dataset Management (09h): Supported 00:19:31.381 00:19:31.381 Error Log 00:19:31.381 ========= 00:19:31.381 Entry: 0 00:19:31.381 Error Count: 0x3 00:19:31.381 Submission Queue Id: 0x0 00:19:31.381 Command Id: 0x5 00:19:31.381 Phase Bit: 0 00:19:31.381 Status Code: 0x2 00:19:31.381 Status Code Type: 0x0 00:19:31.381 Do Not Retry: 1 00:19:31.640 Error 
Location: 0x28 00:19:31.640 LBA: 0x0 00:19:31.640 Namespace: 0x0 00:19:31.640 Vendor Log Page: 0x0 00:19:31.640 ----------- 00:19:31.640 Entry: 1 00:19:31.640 Error Count: 0x2 00:19:31.640 Submission Queue Id: 0x0 00:19:31.640 Command Id: 0x5 00:19:31.640 Phase Bit: 0 00:19:31.640 Status Code: 0x2 00:19:31.640 Status Code Type: 0x0 00:19:31.640 Do Not Retry: 1 00:19:31.640 Error Location: 0x28 00:19:31.640 LBA: 0x0 00:19:31.640 Namespace: 0x0 00:19:31.640 Vendor Log Page: 0x0 00:19:31.640 ----------- 00:19:31.640 Entry: 2 00:19:31.640 Error Count: 0x1 00:19:31.640 Submission Queue Id: 0x0 00:19:31.640 Command Id: 0x4 00:19:31.640 Phase Bit: 0 00:19:31.640 Status Code: 0x2 00:19:31.640 Status Code Type: 0x0 00:19:31.640 Do Not Retry: 1 00:19:31.640 Error Location: 0x28 00:19:31.640 LBA: 0x0 00:19:31.640 Namespace: 0x0 00:19:31.640 Vendor Log Page: 0x0 00:19:31.640 00:19:31.640 Number of Queues 00:19:31.640 ================ 00:19:31.640 Number of I/O Submission Queues: 128 00:19:31.640 Number of I/O Completion Queues: 128 00:19:31.640 00:19:31.640 ZNS Specific Controller Data 00:19:31.640 ============================ 00:19:31.641 Zone Append Size Limit: 0 00:19:31.641 00:19:31.641 00:19:31.641 Active Namespaces 00:19:31.641 ================= 00:19:31.641 get_feature(0x05) failed 00:19:31.641 Namespace ID:1 00:19:31.641 Command Set Identifier: NVM (00h) 00:19:31.641 Deallocate: Supported 00:19:31.641 Deallocated/Unwritten Error: Not Supported 00:19:31.641 Deallocated Read Value: Unknown 00:19:31.641 Deallocate in Write Zeroes: Not Supported 00:19:31.641 Deallocated Guard Field: 0xFFFF 00:19:31.641 Flush: Supported 00:19:31.641 Reservation: Not Supported 00:19:31.641 Namespace Sharing Capabilities: Multiple Controllers 00:19:31.641 Size (in LBAs): 1310720 (5GiB) 00:19:31.641 Capacity (in LBAs): 1310720 (5GiB) 00:19:31.641 Utilization (in LBAs): 1310720 (5GiB) 00:19:31.641 UUID: ed4984b9-14ba-4bac-9c63-0b348dfdce30 00:19:31.641 Thin Provisioning: Not Supported 00:19:31.641 Per-NS Atomic Units: Yes 00:19:31.641 Atomic Boundary Size (Normal): 0 00:19:31.641 Atomic Boundary Size (PFail): 0 00:19:31.641 Atomic Boundary Offset: 0 00:19:31.641 NGUID/EUI64 Never Reused: No 00:19:31.641 ANA group ID: 1 00:19:31.641 Namespace Write Protected: No 00:19:31.641 Number of LBA Formats: 1 00:19:31.641 Current LBA Format: LBA Format #00 00:19:31.641 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:31.641 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.641 rmmod nvme_tcp 00:19:31.641 rmmod nvme_fabrics 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:31.641 08:53:02 
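With the initiator-side nvme-tcp and nvme-fabrics modules removed, the EXIT trap finishes in the lines that follow: the SPDK_NVMF-tagged iptables rules are filtered back out, the veth/bridge topology and the target namespace are deleted, and clean_kernel_target unwinds the configfs tree before unloading nvmet. A condensed sketch of that remaining teardown (namespace deletion and the echo 0 target are assumptions, since _remove_spdk_ns is suppressed and redirections are not traced):

iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules the test added
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                          # assumed equivalent of _remove_spdk_ns
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir  /sys/kernel/config/nvmet/ports/1
rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet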
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:31.641 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:31.901 08:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:32.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:32.728 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:32.728 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:32.728 00:19:32.728 real 0m3.283s 00:19:32.728 user 0m1.113s 00:19:32.728 sys 0m1.496s 00:19:32.728 08:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.728 ************************************ 00:19:32.728 08:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.728 END TEST nvmf_identify_kernel_target 00:19:32.728 ************************************ 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.989 ************************************ 00:19:32.989 START TEST nvmf_auth_host 00:19:32.989 ************************************ 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:32.989 * Looking for test storage... 
00:19:32.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:32.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.989 --rc genhtml_branch_coverage=1 00:19:32.989 --rc genhtml_function_coverage=1 00:19:32.989 --rc genhtml_legend=1 00:19:32.989 --rc geninfo_all_blocks=1 00:19:32.989 --rc geninfo_unexecuted_blocks=1 00:19:32.989 00:19:32.989 ' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:32.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.989 --rc genhtml_branch_coverage=1 00:19:32.989 --rc genhtml_function_coverage=1 00:19:32.989 --rc genhtml_legend=1 00:19:32.989 --rc geninfo_all_blocks=1 00:19:32.989 --rc geninfo_unexecuted_blocks=1 00:19:32.989 00:19:32.989 ' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:32.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.989 --rc genhtml_branch_coverage=1 00:19:32.989 --rc genhtml_function_coverage=1 00:19:32.989 --rc genhtml_legend=1 00:19:32.989 --rc geninfo_all_blocks=1 00:19:32.989 --rc geninfo_unexecuted_blocks=1 00:19:32.989 00:19:32.989 ' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:32.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.989 --rc genhtml_branch_coverage=1 00:19:32.989 --rc genhtml_function_coverage=1 00:19:32.989 --rc genhtml_legend=1 00:19:32.989 --rc geninfo_all_blocks=1 00:19:32.989 --rc geninfo_unexecuted_blocks=1 00:19:32.989 00:19:32.989 ' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.989 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:32.990 Cannot find device "nvmf_init_br" 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:32.990 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:33.249 Cannot find device "nvmf_init_br2" 00:19:33.249 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:33.249 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:33.249 Cannot find device "nvmf_tgt_br" 00:19:33.249 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:33.249 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.249 Cannot find device "nvmf_tgt_br2" 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:33.250 Cannot find device "nvmf_init_br" 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:33.250 Cannot find device "nvmf_init_br2" 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:33.250 Cannot find device "nvmf_tgt_br" 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:33.250 Cannot find device "nvmf_tgt_br2" 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:33.250 Cannot find device "nvmf_br" 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:33.250 08:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:33.250 Cannot find device "nvmf_init_if" 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:33.250 Cannot find device "nvmf_init_if2" 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.250 08:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:33.250 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
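The nvmf_veth_init trace above (nvmf/common.sh@177 through @214) builds the test network: one veth pair per interface, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and every bridge-side peer enslaved to nvmf_br. Condensed to a single initiator/target pair (the log repeats the same steps for the *_if2 interfaces):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                         # both peer ends join the bridge
  ip link set nvmf_tgt_br master nvmf_br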
00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:33.509 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.509 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:19:33.509 00:19:33.509 --- 10.0.0.3 ping statistics --- 00:19:33.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.509 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:33.509 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:33.509 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:19:33.509 00:19:33.509 --- 10.0.0.4 ping statistics --- 00:19:33.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.509 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:33.509 00:19:33.509 --- 10.0.0.1 ping statistics --- 00:19:33.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.509 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:33.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:33.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:33.509 00:19:33.509 --- 10.0.0.2 ping statistics --- 00:19:33.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.509 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:33.509 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78878 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78878 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78878 ']' 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
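With the namespace wired up and reachable (the four pings above), nvmfappstart launches the target inside it and waitforlisten blocks until the RPC socket answers. A simplified equivalent, assuming the default /var/tmp/spdk.sock socket and using a plain polling loop in place of the helper's internals:

  # Start nvmf_tgt in the target namespace with the nvme_auth debug flag, as traced above.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # Poll the RPC socket until the app is ready (stand-in for waitforlisten).
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done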
00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.510 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=de8b81ac4a69c21e95d23b56b16f5340 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lSr 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key de8b81ac4a69c21e95d23b56b16f5340 0 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 de8b81ac4a69c21e95d23b56b16f5340 0 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=de8b81ac4a69c21e95d23b56b16f5340 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lSr 00:19:34.077 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lSr 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lSr 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.078 08:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0812e6fb70bd02213b1db36e56e2a6e6f3f9178f0bd7d0ff76e3084ed67c1fc1 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.l1T 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0812e6fb70bd02213b1db36e56e2a6e6f3f9178f0bd7d0ff76e3084ed67c1fc1 3 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0812e6fb70bd02213b1db36e56e2a6e6f3f9178f0bd7d0ff76e3084ed67c1fc1 3 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0812e6fb70bd02213b1db36e56e2a6e6f3f9178f0bd7d0ff76e3084ed67c1fc1 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.l1T 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.l1T 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.l1T 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1d38f7ebf37668a583752fe662627ee32dfca0055c9d461f 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tFI 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1d38f7ebf37668a583752fe662627ee32dfca0055c9d461f 0 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1d38f7ebf37668a583752fe662627ee32dfca0055c9d461f 0 
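Each gen_dhchap_key call traced here follows the same pattern: read random bytes from /dev/urandom with xxd, wrap them in a DHHC-1:<id>: envelope, and store the result in a mode-0600 temp file. A minimal sketch; the real helper encodes the secret with an inline python snippet that xtrace does not show (and appends a checksum before the closing colon), so the plain base64 below is a simplified assumption:

  key_hex=$(xxd -p -c0 -l 16 /dev/urandom)           # 32 hex chars of secret material, as in the trace
  file=$(mktemp -t spdk.key-null.XXX)                 # e.g. /tmp/spdk.key-null.lSr in the log
  # The '00' field matches the keys shown later in this log; encoding without the checksum is an assumption.
  printf 'DHHC-1:00:%s:\n' "$(printf %s "$key_hex" | base64 -w0)" > "$file"
  chmod 0600 "$file"                                  # restrict permissions, as the helper does
  echo "$file"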
00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1d38f7ebf37668a583752fe662627ee32dfca0055c9d461f 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:34.078 08:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tFI 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tFI 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.tFI 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=440f61c845a606324aa8a74a04f909d7f926771808cb5159 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kDn 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 440f61c845a606324aa8a74a04f909d7f926771808cb5159 2 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 440f61c845a606324aa8a74a04f909d7f926771808cb5159 2 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=440f61c845a606324aa8a74a04f909d7f926771808cb5159 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kDn 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kDn 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.kDn 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.338 08:53:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b77d6df303e54a86e14bfd2da1afeb2 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RGY 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b77d6df303e54a86e14bfd2da1afeb2 1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b77d6df303e54a86e14bfd2da1afeb2 1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9b77d6df303e54a86e14bfd2da1afeb2 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RGY 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RGY 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.RGY 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=959f911121d5b24b4fa8301037430ce8 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TmL 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 959f911121d5b24b4fa8301037430ce8 1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 959f911121d5b24b4fa8301037430ce8 1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=959f911121d5b24b4fa8301037430ce8 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TmL 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TmL 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.TmL 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.338 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9ac55e3d1c9888156ba444f9181b4883fa148ac8ab9b8fb5 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.llf 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9ac55e3d1c9888156ba444f9181b4883fa148ac8ab9b8fb5 2 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9ac55e3d1c9888156ba444f9181b4883fa148ac8ab9b8fb5 2 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9ac55e3d1c9888156ba444f9181b4883fa148ac8ab9b8fb5 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:34.339 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.llf 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.llf 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.llf 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:34.598 08:53:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ab881ff6c4dcec882a949c3bfbd21836 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8d2 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ab881ff6c4dcec882a949c3bfbd21836 0 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ab881ff6c4dcec882a949c3bfbd21836 0 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ab881ff6c4dcec882a949c3bfbd21836 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8d2 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8d2 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8d2 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=18c772f94e8f254a2936e78e853ec8b1992de97170c5cdc2e5bb3594b7ea19a1 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6ZL 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 18c772f94e8f254a2936e78e853ec8b1992de97170c5cdc2e5bb3594b7ea19a1 3 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 18c772f94e8f254a2936e78e853ec8b1992de97170c5cdc2e5bb3594b7ea19a1 3 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=18c772f94e8f254a2936e78e853ec8b1992de97170c5cdc2e5bb3594b7ea19a1 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6ZL 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6ZL 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6ZL 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78878 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78878 ']' 00:19:34.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.598 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lSr 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.l1T ]] 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.l1T 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.tFI 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.kDn ]] 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.kDn 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.RGY 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.858 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.TmL ]] 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TmL 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.llf 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8d2 ]] 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8d2 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6ZL 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:35.118 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.118 08:53:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:35.119 08:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:35.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:35.378 Waiting for block devices as requested 00:19:35.378 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:35.637 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:36.204 No valid GPT data, bailing 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:36.204 08:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:36.204 No valid GPT data, bailing 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:36.204 No valid GPT data, bailing 00:19:36.204 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:36.464 No valid GPT data, bailing 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -a 10.0.0.1 -t tcp -s 4420 00:19:36.464 00:19:36.464 Discovery Log Number of Records 2, Generation counter 2 00:19:36.464 =====Discovery Log Entry 0====== 00:19:36.464 trtype: tcp 00:19:36.464 adrfam: ipv4 00:19:36.464 subtype: current discovery subsystem 00:19:36.464 treq: not specified, sq flow control disable supported 00:19:36.464 portid: 1 00:19:36.464 trsvcid: 4420 00:19:36.464 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:36.464 traddr: 10.0.0.1 00:19:36.464 eflags: none 00:19:36.464 sectype: none 00:19:36.464 =====Discovery Log Entry 1====== 00:19:36.464 trtype: tcp 00:19:36.464 adrfam: ipv4 00:19:36.464 subtype: nvme subsystem 00:19:36.464 treq: not specified, sq flow control disable supported 00:19:36.464 portid: 1 00:19:36.464 trsvcid: 4420 00:19:36.464 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:36.464 traddr: 10.0.0.1 00:19:36.464 eflags: none 00:19:36.464 sectype: none 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
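At this point the trace has exported the chosen namespace as a kernel NVMe/TCP target: subsystem, namespace and port directories are created under /sys/kernel/config/nvmet, the backing device and the 10.0.0.1:4420 TCP listener parameters are echoed into place, the subsystem is linked into the port, and nvme discover confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0. The trace records only the echoed values, not the attribute files they land in; the names used below (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs attributes and are an assumption here:

  # Export /dev/nvme1n1 as a kernel NVMe/TCP target (sketch; attribute names assumed).
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"      # auth.sh later restricts access per host NQN
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list two records, as in the output above

The host/auth.sh@36-@38 frames that follow create a hosts/nqn.2024-02.io.spdk:host0 entry and link it into the subsystem's allowed_hosts, so only that host NQN may connect.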
ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.464 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.724 nvme0n1 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:36.724 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
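Each nvmet_auth_set_key call traced above (host/auth.sh@42-@51) pushes the digest, FFDHE group and DHHC-1 secrets for one key id into that host entry; when a controller key is configured it is written as well, enabling bidirectional authentication. Only the echoed values appear in the trace; the destination attribute names below (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the usual kernel nvmet host attributes and are assumed here, and the key values are elided:

  # Program DH-HMAC-CHAP material for the allowed host NQN on the kernel target (sketch).
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'  > "$host/dhchap_hash"       # digest under test
  echo ffdhe2048       > "$host/dhchap_dhgroup"    # FFDHE group under test
  echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host key (value elided)
  echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"   # controller key, only for bidirectional auth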
host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.725 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.984 nvme0n1 00:19:36.984 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.984 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.984 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.984 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.984 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.984 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.984 
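On the initiator side, connect_authenticate (host/auth.sh@55-@65 above) limits bdev_nvme to the digest/DH-group combination under test, attaches with the matching DH-HMAC-CHAP key pair, checks that a controller named nvme0 appears, and detaches again. The same four RPCs issued directly; rpc.py is assumed to be the scripts/rpc.py wrapper behind rpc_cmd, and key0/ckey0 are keyring names registered earlier in the run (not shown in this excerpt):

  # One authenticated attach/detach cycle against the kernel target (sketch).
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers     # expect exactly one controller, nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0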
08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.984 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.984 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.985 08:53:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.985 nvme0n1 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.985 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:37.244 08:53:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.244 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.245 08:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.245 nvme0n1 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.245 08:53:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.245 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.504 nvme0n1 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.504 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.504 
08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.505 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:19:37.764 nvme0n1 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.764 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:38.023 08:53:08 
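The @100-@103 frames mark the test matrix: every digest, every FFDHE group and every key id is first programmed into the kernel host entry with nvmet_auth_set_key and then exercised with connect_authenticate, which is why the ffdhe2048 sequence above now repeats for ffdhe3072. A sketch of that driver loop, with the array contents taken from the sha256,sha384,sha512 and ffdhe2048..ffdhe8192 values printed earlier in the trace, and the keys array (ids 0-4) assumed to hold the DHHC-1 secrets shown above:

  # Shape of the authentication test matrix (sketch reconstructed from the trace).
  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                            # keys[0..4]: DHHC-1 secrets
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"        # kernel target side (configfs)
        connect_authenticate "$digest" "$dhgroup" "$keyid"      # SPDK initiator side (RPC)
      done
    done
  done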
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.023 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.024 nvme0n1 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.024 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.283 08:53:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.283 08:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.283 08:53:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.283 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.284 nvme0n1 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.284 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.542 nvme0n1 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.542 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.801 nvme0n1 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.801 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.802 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.061 nvme0n1 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:39.061 08:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.630 08:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.630 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.889 nvme0n1 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.889 08:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.889 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.148 nvme0n1 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.148 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.149 08:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.408 nvme0n1 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.408 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.667 nvme0n1 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.667 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.668 08:53:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.668 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.926 nvme0n1 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.926 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:40.927 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:40.927 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:40.927 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.927 08:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.832 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.833 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.833 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.833 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.091 nvme0n1 00:19:43.091 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.091 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.091 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.091 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.091 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.091 08:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.091 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.091 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.091 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.091 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.349 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.349 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.350 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.609 nvme0n1 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.609 08:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.609 08:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.609 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.610 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.177 nvme0n1 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:44.177 08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.177 
08:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.436 nvme0n1 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.436 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.003 nvme0n1 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.003 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.004 08:53:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.004 08:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.572 nvme0n1 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.572 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.140 nvme0n1 00:19:46.140 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.140 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.140 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.140 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.140 08:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.140 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.140 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.140 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.140 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:46.140 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.400 
08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.400 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.968 nvme0n1 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.968 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.969 08:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.559 nvme0n1 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.559 08:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.559 08:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.559 08:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.126 nvme0n1 00:19:48.126 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.126 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.126 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.126 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.126 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:48.385 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.386 nvme0n1 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.386 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.646 nvme0n1 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:48.646 
08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.646 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.906 nvme0n1 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.906 
08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.906 nvme0n1 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.906 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:49.166 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.167 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.167 nvme0n1 00:19:49.167 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.167 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.167 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.167 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.167 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.167 08:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.167 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 nvme0n1 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.427 
08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.427 08:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.427 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.686 nvme0n1 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:49.686 08:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.686 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.687 nvme0n1 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.687 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.946 08:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.946 nvme0n1 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.946 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:50.206 
08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:50.206 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.207 08:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
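The xtrace above repeats one procedure per (digest, dhgroup, keyid) combination. The sketch below reconstructs a single iteration purely from the commands visible in this trace (host/auth.sh and nvmf/common.sh); it is an illustration, not the verbatim auth.sh source. All names in it (rpc_cmd, get_main_ns_ip, nvmet_auth_set_key, the keys/ckeys arrays) come from the trace itself, but the exact loop nesting is inferred, the redirect targets of the nvmet_auth_set_key echoes are elided by xtrace in this log, and the DH groups listed are only the ones exercised in this excerpt.

# Sketch of one iteration of the sha384 DH-HMAC-CHAP loop traced above (illustrative).
# Assumes the harness helpers and the DHHC-1 keys key0..key4 / ckey0..ckey4 were
# registered earlier in the test run (not shown in this excerpt).
digest=sha384
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do   # groups seen in this excerpt
    for keyid in "${!keys[@]}"; do
        # Target side (auth.sh@103): program digest, DH group and key for the host entry;
        # the echo redirect targets are not visible in this xtrace.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side (auth.sh@60): restrict the initiator to the matching digest/DH group.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach over TCP with the per-keyid DH-HMAC-CHAP key, adding the controller
        # key only when a ckey exists for this keyid (auth.sh@58, auth.sh@61).
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Expect exactly one controller named nvme0, then tear it down (auth.sh@64-65).
        [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

In this excerpt get_main_ns_ip resolves to NVMF_INITIATOR_IP (10.0.0.1) because the transport is tcp (nvmf/common.sh@773-783), which is why every attach above targets 10.0.0.1:4420.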
00:19:50.207 nvme0n1 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:50.207 08:53:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.207 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.467 nvme0n1 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.467 08:53:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:50.467 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:50.725 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.726 08:53:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.726 nvme0n1 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.726 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.984 nvme0n1 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.984 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.244 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.245 08:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.245 nvme0n1 00:19:51.245 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.245 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.245 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.245 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.245 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.245 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.504 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.764 nvme0n1 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:51.764 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.765 08:53:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.765 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.106 nvme0n1 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.106 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.107 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.107 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.107 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.107 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.107 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.107 08:53:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.107 08:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.404 nvme0n1 00:19:52.404 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.404 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.404 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.404 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.404 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.404 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.663 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.664 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.664 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.664 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.922 nvme0n1 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.922 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.923 08:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.491 nvme0n1 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.491 08:53:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.491 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.750 nvme0n1 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
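The host-side half of each iteration (connect_authenticate, host/auth.sh@55-65) reduces to the RPC sequence sketched below, taken from the commands visible in the trace. rpc_cmd is the autotest wrapper around scripts/rpc.py; the key0/ckey0 names refer to keys the test registered earlier, outside this excerpt:

    # Restrict the initiator to the digest/dhgroup under test, attach the
    # controller with DH-HMAC-CHAP keys, check that it shows up, and detach.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0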
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.750 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.008 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.009 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.009 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.009 08:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.576 nvme0n1 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.576 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.577 08:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.143 nvme0n1 00:19:55.143 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.143 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.143 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.143 08:53:26 
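Between set_options and attach_controller, each iteration also runs get_main_ns_ip (nvmf/common.sh@769-783) to pick the address to dial. A sketch of that lookup as it reads from the trace; the transport variable name is an assumption, the trace only shows the expanded value tcp:

    # Map the transport in use to the right address variable and dereference it;
    # for tcp this resolves NVMF_INITIATOR_IP, which is 10.0.0.1 in this run.
    declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
    ip=${ip_candidates[$TEST_TRANSPORT]}   # $TEST_TRANSPORT assumed; trace shows 'tcp'
    [[ -n ${!ip} ]] && echo "${!ip}"       # -> 10.0.0.1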
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.143 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.143 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.402 08:53:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.402 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.403 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.403 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.403 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.970 nvme0n1 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:55.970 08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.970 
08:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.537 nvme0n1 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.537 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.794 08:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.359 nvme0n1 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:57.359 08:53:28 
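Here the outer digest loop advances from sha384 to sha512 and the dhgroup loop restarts at ffdhe2048. The sweep that produces all of the iterations above follows the loop markers at host/auth.sh@100-104:

    # Every digest is exercised against every DH group and every key index;
    # this excerpt covers sha384 with ffdhe6144 and ffdhe8192 (key ids 0-4)
    # and the start of the sha512/ffdhe2048 pass, which is cut off below.
    for digest in "${digests[@]}"; do              # auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do        # auth.sh@101
            for keyid in "${!keys[@]}"; do         # auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # auth.sh@104
            done
        done
    done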
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.359 08:53:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.359 nvme0n1 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.359 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:57.616 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:57.617 08:53:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.617 nvme0n1 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.617 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.875 nvme0n1 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.875 nvme0n1 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.875 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.134 nvme0n1 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.134 08:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.134 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.134 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.134 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.134 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.134 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.393 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.394 nvme0n1 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.652 nvme0n1 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:58.652 
08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.652 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.653 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.911 nvme0n1 00:19:58.911 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.911 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.912 
08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.912 nvme0n1 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.912 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.171 08:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.171 nvme0n1 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.171 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.172 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.430 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.431 nvme0n1 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.431 
08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.431 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.690 08:53:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.690 nvme0n1 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.690 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:59.949 08:53:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.949 nvme0n1 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.949 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.209 08:53:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.209 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.210 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:00.210 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.210 08:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.210 nvme0n1 00:20:00.210 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.210 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.210 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.210 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.210 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.469 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.470 
08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.470 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:00.729 nvme0n1 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:00.729 08:53:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.729 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.989 nvme0n1 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.989 08:53:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.989 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.248 08:53:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.248 08:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.507 nvme0n1 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:20:01.507 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.508 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.076 nvme0n1 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.076 08:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.335 nvme0n1 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.335 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.336 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.336 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.336 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.906 nvme0n1 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4YjgxYWM0YTY5YzIxZTk1ZDIzYjU2YjE2ZjUzNDDs6hXN: 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: ]] 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDgxMmU2ZmI3MGJkMDIyMTNiMWRiMzZlNTZlMmE2ZTZmM2Y5MTc4ZjBiZDdkMGZmNzZlMzA4NGVkNjdjMWZjMX2NhhM=: 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.906 08:53:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.906 08:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.478 nvme0n1 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.478 08:53:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.478 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.044 nvme0n1 00:20:04.044 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.044 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.044 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.044 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.044 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.044 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:20:04.304 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.305 08:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.305 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.900 nvme0n1 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWFjNTVlM2QxYzk4ODgxNTZiYTQ0NGY5MTgxYjQ4ODNmYTE0OGFjOGFiOWI4ZmI1ou5lcQ==: 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: ]] 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI4ODFmZjZjNGRjZWM4ODJhOTQ5YzNiZmJkMjE4MzYRFsiL: 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.900 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.901 08:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.468 nvme0n1 00:20:05.468 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.468 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.468 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.468 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.468 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.468 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThjNzcyZjk0ZThmMjU0YTI5MzZlNzhlODUzZWM4YjE5OTJkZTk3MTcwYzVjZGMyZTViYjM1OTRiN2VhMTlhMQLo96E=: 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.469 08:53:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.469 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.405 nvme0n1 00:20:06.405 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.405 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.405 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.405 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.405 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.405 08:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:20:06.405 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.406 request: 00:20:06.406 { 00:20:06.406 "name": "nvme0", 00:20:06.406 "trtype": "tcp", 00:20:06.406 "traddr": "10.0.0.1", 00:20:06.406 "adrfam": "ipv4", 00:20:06.406 "trsvcid": "4420", 00:20:06.406 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:06.406 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:06.406 "prchk_reftag": false, 00:20:06.406 "prchk_guard": false, 00:20:06.406 "hdgst": false, 00:20:06.406 "ddgst": false, 00:20:06.406 "allow_unrecognized_csi": false, 00:20:06.406 "method": "bdev_nvme_attach_controller", 00:20:06.406 "req_id": 1 00:20:06.406 } 00:20:06.406 Got JSON-RPC error response 00:20:06.406 response: 00:20:06.406 { 00:20:06.406 "code": -5, 00:20:06.406 "message": "Input/output error" 00:20:06.406 } 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.406 request: 00:20:06.406 { 00:20:06.406 "name": "nvme0", 00:20:06.406 "trtype": "tcp", 00:20:06.406 "traddr": "10.0.0.1", 00:20:06.406 "adrfam": "ipv4", 00:20:06.406 "trsvcid": "4420", 00:20:06.406 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:06.406 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:06.406 "prchk_reftag": false, 00:20:06.406 "prchk_guard": false, 00:20:06.406 "hdgst": false, 00:20:06.406 "ddgst": false, 00:20:06.406 "dhchap_key": "key2", 00:20:06.406 "allow_unrecognized_csi": false, 00:20:06.406 "method": "bdev_nvme_attach_controller", 00:20:06.406 "req_id": 1 00:20:06.406 } 00:20:06.406 Got JSON-RPC error response 00:20:06.406 response: 00:20:06.406 { 00:20:06.406 "code": -5, 00:20:06.406 "message": "Input/output error" 00:20:06.406 } 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.406 08:53:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.406 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.406 request: 00:20:06.407 { 00:20:06.407 "name": "nvme0", 00:20:06.407 "trtype": "tcp", 00:20:06.407 "traddr": "10.0.0.1", 00:20:06.407 "adrfam": "ipv4", 00:20:06.407 "trsvcid": "4420", 
00:20:06.407 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:06.407 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:06.407 "prchk_reftag": false, 00:20:06.407 "prchk_guard": false, 00:20:06.407 "hdgst": false, 00:20:06.407 "ddgst": false, 00:20:06.407 "dhchap_key": "key1", 00:20:06.407 "dhchap_ctrlr_key": "ckey2", 00:20:06.407 "allow_unrecognized_csi": false, 00:20:06.407 "method": "bdev_nvme_attach_controller", 00:20:06.407 "req_id": 1 00:20:06.407 } 00:20:06.407 Got JSON-RPC error response 00:20:06.407 response: 00:20:06.407 { 00:20:06.407 "code": -5, 00:20:06.407 "message": "Input/output error" 00:20:06.407 } 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.407 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.665 nvme0n1 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.665 request: 00:20:06.665 { 00:20:06.665 "name": "nvme0", 00:20:06.665 "dhchap_key": "key1", 00:20:06.665 "dhchap_ctrlr_key": "ckey2", 00:20:06.665 "method": "bdev_nvme_set_keys", 00:20:06.665 "req_id": 1 00:20:06.665 } 00:20:06.665 Got JSON-RPC error response 00:20:06.665 response: 00:20:06.665 
{ 00:20:06.665 "code": -5, 00:20:06.665 "message": "Input/output error" 00:20:06.665 } 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:06.665 08:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWQzOGY3ZWJmMzc2NjhhNTgzNzUyZmU2NjI2MjdlZTMyZGZjYTAwNTVjOWQ0NjFm0mftzg==: 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQwZjYxYzg0NWE2MDYzMjRhYThhNzRhMDRmOTA5ZDdmOTI2NzcxODA4Y2I1MTU5LzYQWA==: 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.040 nvme0n1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWI3N2Q2ZGYzMDNlNTRhODZlMTRiZmQyZGExYWZlYjIlW0QQ: 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5ZjkxMTEyMWQ1YjI0YjRmYTgzMDEwMzc0MzBjZTjEWQP8: 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.040 request: 00:20:08.040 { 00:20:08.040 "name": "nvme0", 00:20:08.040 "dhchap_key": "key2", 00:20:08.040 "dhchap_ctrlr_key": "ckey1", 00:20:08.040 "method": "bdev_nvme_set_keys", 00:20:08.040 "req_id": 1 00:20:08.040 } 00:20:08.040 Got JSON-RPC error response 00:20:08.040 response: 00:20:08.040 { 00:20:08.040 "code": -13, 00:20:08.040 "message": "Permission denied" 00:20:08.040 } 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:08.040 08:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:08.971 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:08.971 rmmod nvme_tcp 00:20:08.971 rmmod nvme_fabrics 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78878 ']' 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78878 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78878 ']' 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78878 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78878 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.247 killing process with pid 78878 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78878' 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78878 00:20:09.247 08:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78878 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:09.506 08:53:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:09.506 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:09.765 08:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:10.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:10.332 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:20:10.591 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:10.591 08:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lSr /tmp/spdk.key-null.tFI /tmp/spdk.key-sha256.RGY /tmp/spdk.key-sha384.llf /tmp/spdk.key-sha512.6ZL /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:10.591 08:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:10.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:10.849 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:10.849 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:11.107 00:20:11.107 real 0m38.112s 00:20:11.107 user 0m34.533s 00:20:11.107 sys 0m3.991s 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.107 ************************************ 00:20:11.107 END TEST nvmf_auth_host 00:20:11.107 ************************************ 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.107 ************************************ 00:20:11.107 START TEST nvmf_digest 00:20:11.107 ************************************ 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:11.107 * Looking for test storage... 
00:20:11.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:11.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.107 --rc genhtml_branch_coverage=1 00:20:11.107 --rc genhtml_function_coverage=1 00:20:11.107 --rc genhtml_legend=1 00:20:11.107 --rc geninfo_all_blocks=1 00:20:11.107 --rc geninfo_unexecuted_blocks=1 00:20:11.107 00:20:11.107 ' 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:11.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.107 --rc genhtml_branch_coverage=1 00:20:11.107 --rc genhtml_function_coverage=1 00:20:11.107 --rc genhtml_legend=1 00:20:11.107 --rc geninfo_all_blocks=1 00:20:11.107 --rc geninfo_unexecuted_blocks=1 00:20:11.107 00:20:11.107 ' 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:11.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.107 --rc genhtml_branch_coverage=1 00:20:11.107 --rc genhtml_function_coverage=1 00:20:11.107 --rc genhtml_legend=1 00:20:11.107 --rc geninfo_all_blocks=1 00:20:11.107 --rc geninfo_unexecuted_blocks=1 00:20:11.107 00:20:11.107 ' 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:11.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.107 --rc genhtml_branch_coverage=1 00:20:11.107 --rc genhtml_function_coverage=1 00:20:11.107 --rc genhtml_legend=1 00:20:11.107 --rc geninfo_all_blocks=1 00:20:11.107 --rc geninfo_unexecuted_blocks=1 00:20:11.107 00:20:11.107 ' 00:20:11.107 08:53:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.107 08:53:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.107 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.108 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:11.108 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.108 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:11.108 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:11.367 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:11.367 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:11.368 Cannot find device "nvmf_init_br" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:11.368 Cannot find device "nvmf_init_br2" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:11.368 Cannot find device "nvmf_tgt_br" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:11.368 Cannot find device "nvmf_tgt_br2" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:11.368 Cannot find device "nvmf_init_br" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:11.368 Cannot find device "nvmf_init_br2" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:11.368 Cannot find device "nvmf_tgt_br" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:11.368 Cannot find device "nvmf_tgt_br2" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:11.368 Cannot find device "nvmf_br" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:11.368 Cannot find device "nvmf_init_if" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:11.368 Cannot find device "nvmf_init_if2" 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:11.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:11.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:11.368 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:11.626 08:53:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:11.626 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:11.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:11.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:20:11.627 00:20:11.627 --- 10.0.0.3 ping statistics --- 00:20:11.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.627 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:11.627 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:11.627 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:11.627 00:20:11.627 --- 10.0.0.4 ping statistics --- 00:20:11.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.627 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:11.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:11.627 00:20:11.627 --- 10.0.0.1 ping statistics --- 00:20:11.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.627 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:11.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:20:11.627 00:20:11.627 --- 10.0.0.2 ping statistics --- 00:20:11.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.627 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:11.627 ************************************ 00:20:11.627 START TEST nvmf_digest_clean 00:20:11.627 ************************************ 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
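
The trace above is the nvmf_veth_init step: it builds the virtual network the digest tests run over. The earlier "Cannot find device" lines are the pre-setup cleanup of interfaces that do not exist yet and are expected. As a reading aid, the following is a minimal stand-alone sketch of that topology, assembled only from the commands visible in the log (a network namespace for the target, veth pairs for initiator and target, a bridge joining the host-side peers, iptables ACCEPT rules for port 4420, and the same ping checks). It is an illustration for this log, not the canonical nvmf/common.sh implementation, and it omits the cleanup and error handling the real helper performs.

#!/usr/bin/env bash
# Sketch of the veth topology built by nvmf_veth_init in the trace above.
# Interface names and 10.0.0.x addresses are taken from the log; cleanup,
# retries and the namespace-aware helpers of nvmf/common.sh are omitted.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if end carries an address, the *_br end joins the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces are moved into the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing as in the log: initiators .1/.2 on the host, targets .3/.4 in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring links up and tie the host-side peers together with a bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Accept NVMe/TCP traffic on the default port and allow bridge forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same sanity checks as the log: target addresses answer from the host,
# initiator addresses answer from inside the namespace.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ping -c 1 10.0.0.2
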
00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80525 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80525 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80525 ']' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.627 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:11.884 [2024-11-20 08:53:42.560348] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:11.884 [2024-11-20 08:53:42.561186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.884 [2024-11-20 08:53:42.718092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.142 [2024-11-20 08:53:42.800840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.142 [2024-11-20 08:53:42.800920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.142 [2024-11-20 08:53:42.800934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.142 [2024-11-20 08:53:42.800946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.142 [2024-11-20 08:53:42.800955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
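
nvmfappstart in the trace above amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line). A rough stand-alone equivalent, with paths as they appear in the log and a simple polling loop standing in for waitforlisten, is sketched below; it is an assumption-laden illustration, not the autotest helper itself.

# Hypothetical stand-alone version of the nvmfappstart/waitforlisten step above.
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Poll the default RPC socket until the target responds; --wait-for-rpc keeps
# the framework from initializing until framework_start_init is issued later.
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
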
00:20:12.142 [2024-11-20 08:53:42.801474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.142 08:53:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:12.142 [2024-11-20 08:53:42.976327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:12.142 null0 00:20:12.142 [2024-11-20 08:53:43.041483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.400 [2024-11-20 08:53:43.065683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80545 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80545 /var/tmp/bperf.sock 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80545 ']' 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.400 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:12.400 [2024-11-20 08:53:43.136245] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:12.400 [2024-11-20 08:53:43.136407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80545 ] 00:20:12.400 [2024-11-20 08:53:43.288495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.658 [2024-11-20 08:53:43.378638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.658 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.658 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:12.658 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:12.659 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:12.659 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:12.916 [2024-11-20 08:53:43.788529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:13.174 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:13.174 08:53:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:13.432 nvme0n1 00:20:13.432 08:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:13.432 08:53:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:13.692 Running I/O for 2 seconds... 
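
The run_bperf sequence traced above follows a fixed pattern: bdevperf is started in wait mode on its own RPC socket, framework initialization is completed over that socket, an NVMe/TCP controller is attached with the data digest enabled (--ddgst, so data PDUs are protected by CRC32C), and the timed workload is then triggered through bdevperf.py. The sketch below condenses those steps for the first run (randread, 4 KiB, queue depth 128) using the paths shown in the log; socket readiness is handled with a simplistic poll instead of the test suite's waitforlisten, so treat it as an illustration rather than the host/digest.sh implementation.

# Condensed sketch of the bperf sequence traced above (run_bperf randread 4096 128).
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bperf.sock

"$BDEVPERF" -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# Crude stand-in for waitforlisten: wait until the RPC socket file exists.
until [[ -S "$SOCK" ]]; do sleep 0.1; done

# Finish framework init (deferred by --wait-for-rpc), then attach the listener
# created in the target namespace, with data digest enabled on the TCP path.
"$RPC" -s "$SOCK" framework_start_init
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Kick off the 2-second run; results come back as the JSON block seen below.
"$BPERF_PY" -s "$SOCK" perform_tests
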
00:20:15.622 14351.00 IOPS, 56.06 MiB/s [2024-11-20T08:53:46.537Z] 14732.00 IOPS, 57.55 MiB/s 00:20:15.622 Latency(us) 00:20:15.622 [2024-11-20T08:53:46.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.622 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:15.622 nvme0n1 : 2.01 14748.45 57.61 0.00 0.00 8672.14 7685.59 20852.36 00:20:15.622 [2024-11-20T08:53:46.537Z] =================================================================================================================== 00:20:15.622 [2024-11-20T08:53:46.537Z] Total : 14748.45 57.61 0.00 0.00 8672.14 7685.59 20852.36 00:20:15.622 { 00:20:15.622 "results": [ 00:20:15.622 { 00:20:15.622 "job": "nvme0n1", 00:20:15.622 "core_mask": "0x2", 00:20:15.622 "workload": "randread", 00:20:15.622 "status": "finished", 00:20:15.622 "queue_depth": 128, 00:20:15.622 "io_size": 4096, 00:20:15.622 "runtime": 2.006448, 00:20:15.622 "iops": 14748.450993995359, 00:20:15.622 "mibps": 57.61113669529437, 00:20:15.622 "io_failed": 0, 00:20:15.622 "io_timeout": 0, 00:20:15.622 "avg_latency_us": 8672.137692250977, 00:20:15.622 "min_latency_us": 7685.585454545455, 00:20:15.622 "max_latency_us": 20852.363636363636 00:20:15.622 } 00:20:15.622 ], 00:20:15.622 "core_count": 1 00:20:15.622 } 00:20:15.622 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:15.622 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:15.622 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:15.622 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:15.622 | select(.opcode=="crc32c") 00:20:15.622 | "\(.module_name) \(.executed)"' 00:20:15.622 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80545 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80545 ']' 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80545 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80545 00:20:15.880 killing process with pid 80545 00:20:15.880 Received shutdown signal, test time was about 2.000000 seconds 00:20:15.880 00:20:15.880 Latency(us) 00:20:15.880 [2024-11-20T08:53:46.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:15.880 [2024-11-20T08:53:46.795Z] =================================================================================================================== 00:20:15.880 [2024-11-20T08:53:46.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.880 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:15.881 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:15.881 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80545' 00:20:15.881 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80545 00:20:15.881 08:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80545 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80598 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80598 /var/tmp/bperf.sock 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80598 ']' 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:16.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.139 08:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:16.398 [2024-11-20 08:53:47.100294] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:16.398 [2024-11-20 08:53:47.100814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80598 ] 00:20:16.398 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:16.398 Zero copy mechanism will not be used. 00:20:16.398 [2024-11-20 08:53:47.263398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.656 [2024-11-20 08:53:47.342375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.224 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.224 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:17.224 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:17.224 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:17.224 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:17.792 [2024-11-20 08:53:48.425612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:17.792 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:17.792 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:18.051 nvme0n1 00:20:18.052 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:18.052 08:53:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:18.052 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:18.052 Zero copy mechanism will not be used. 00:20:18.052 Running I/O for 2 seconds... 
00:20:20.048 7536.00 IOPS, 942.00 MiB/s [2024-11-20T08:53:50.963Z] 7648.00 IOPS, 956.00 MiB/s 00:20:20.048 Latency(us) 00:20:20.048 [2024-11-20T08:53:50.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.048 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:20.048 nvme0n1 : 2.00 7647.14 955.89 0.00 0.00 2088.97 1832.03 6762.12 00:20:20.048 [2024-11-20T08:53:50.963Z] =================================================================================================================== 00:20:20.048 [2024-11-20T08:53:50.963Z] Total : 7647.14 955.89 0.00 0.00 2088.97 1832.03 6762.12 00:20:20.048 { 00:20:20.048 "results": [ 00:20:20.048 { 00:20:20.048 "job": "nvme0n1", 00:20:20.048 "core_mask": "0x2", 00:20:20.048 "workload": "randread", 00:20:20.048 "status": "finished", 00:20:20.048 "queue_depth": 16, 00:20:20.048 "io_size": 131072, 00:20:20.048 "runtime": 2.002316, 00:20:20.048 "iops": 7647.14460654562, 00:20:20.048 "mibps": 955.8930758182025, 00:20:20.048 "io_failed": 0, 00:20:20.048 "io_timeout": 0, 00:20:20.048 "avg_latency_us": 2088.971672841265, 00:20:20.048 "min_latency_us": 1832.0290909090909, 00:20:20.048 "max_latency_us": 6762.123636363636 00:20:20.048 } 00:20:20.048 ], 00:20:20.048 "core_count": 1 00:20:20.048 } 00:20:20.307 08:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:20.307 08:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:20.307 08:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:20.307 08:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:20.307 | select(.opcode=="crc32c") 00:20:20.307 | "\(.module_name) \(.executed)"' 00:20:20.307 08:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:20.566 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:20.566 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:20.566 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:20.566 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:20.566 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80598 00:20:20.566 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80598 ']' 00:20:20.566 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80598 00:20:20.567 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:20.567 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.567 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80598 00:20:20.567 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:20.567 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
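
After each run the test queries the bdevperf instance's accel-framework statistics and checks that crc32c digest operations were actually executed, and by the expected module (software here, since DSA is disabled in these runs). A stand-alone version of that check, reusing the same RPC call and jq filter as the trace, could look like the sketch below; the socket path and the "software" expectation mirror this log, and with DSA enabled the expected module would differ.

# Sketch of the digest verification step above: confirm crc32c work was done
# by the expected accel module.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
exp_module=software

read -r acc_module acc_executed < <("$RPC" -s "$SOCK" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

if (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]; then
    echo "crc32c executed $acc_executed times by $acc_module"
else
    echo "digest check failed: module=$acc_module executed=$acc_executed" >&2
    exit 1
fi
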
00:20:20.567 killing process with pid 80598 00:20:20.567 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80598' 00:20:20.567 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80598 00:20:20.567 Received shutdown signal, test time was about 2.000000 seconds 00:20:20.567 00:20:20.567 Latency(us) 00:20:20.567 [2024-11-20T08:53:51.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.567 [2024-11-20T08:53:51.482Z] =================================================================================================================== 00:20:20.567 [2024-11-20T08:53:51.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.567 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80598 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80664 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80664 /var/tmp/bperf.sock 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80664 ']' 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.825 08:53:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.825 [2024-11-20 08:53:51.636296] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:20.825 [2024-11-20 08:53:51.636423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80664 ] 00:20:21.085 [2024-11-20 08:53:51.783792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.085 [2024-11-20 08:53:51.859820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.022 08:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.022 08:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:22.022 08:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:22.022 08:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:22.022 08:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:22.280 [2024-11-20 08:53:53.055410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:22.280 08:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:22.281 08:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:22.848 nvme0n1 00:20:22.848 08:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:22.848 08:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:22.848 Running I/O for 2 seconds... 
00:20:24.720 15749.00 IOPS, 61.52 MiB/s [2024-11-20T08:53:55.635Z] 15621.50 IOPS, 61.02 MiB/s 00:20:24.720 Latency(us) 00:20:24.720 [2024-11-20T08:53:55.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:24.720 nvme0n1 : 2.01 15614.88 61.00 0.00 0.00 8189.90 7417.48 16681.89 00:20:24.720 [2024-11-20T08:53:55.635Z] =================================================================================================================== 00:20:24.720 [2024-11-20T08:53:55.635Z] Total : 15614.88 61.00 0.00 0.00 8189.90 7417.48 16681.89 00:20:24.720 { 00:20:24.720 "results": [ 00:20:24.720 { 00:20:24.720 "job": "nvme0n1", 00:20:24.720 "core_mask": "0x2", 00:20:24.720 "workload": "randwrite", 00:20:24.720 "status": "finished", 00:20:24.720 "queue_depth": 128, 00:20:24.720 "io_size": 4096, 00:20:24.720 "runtime": 2.009045, 00:20:24.721 "iops": 15614.881697522953, 00:20:24.721 "mibps": 60.995631630949035, 00:20:24.721 "io_failed": 0, 00:20:24.721 "io_timeout": 0, 00:20:24.721 "avg_latency_us": 8189.902865472164, 00:20:24.721 "min_latency_us": 7417.483636363636, 00:20:24.721 "max_latency_us": 16681.890909090907 00:20:24.721 } 00:20:24.721 ], 00:20:24.721 "core_count": 1 00:20:24.721 } 00:20:24.995 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:24.995 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:24.995 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:24.995 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:24.995 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:24.995 | select(.opcode=="crc32c") 00:20:24.995 | "\(.module_name) \(.executed)"' 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80664 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80664 ']' 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80664 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.270 08:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80664 00:20:25.270 killing process with pid 80664 00:20:25.270 Received shutdown signal, test time was about 2.000000 seconds 00:20:25.270 00:20:25.270 Latency(us) 00:20:25.270 [2024-11-20T08:53:56.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:25.270 [2024-11-20T08:53:56.185Z] =================================================================================================================== 00:20:25.270 [2024-11-20T08:53:56.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.270 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:25.270 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:25.270 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80664' 00:20:25.270 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80664 00:20:25.270 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80664 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80725 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80725 /var/tmp/bperf.sock 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80725 ']' 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:25.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.529 08:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.529 [2024-11-20 08:53:56.320041] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:25.529 [2024-11-20 08:53:56.320146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80725 ] 00:20:25.529 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:25.529 Zero copy mechanism will not be used. [2024-11-20 08:53:56.461537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.788 [2024-11-20 08:53:56.537766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.724 08:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.724 08:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:26.724 08:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:26.724 08:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:26.724 08:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:26.983 [2024-11-20 08:53:57.701334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.983 08:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.983 08:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:27.242 nvme0n1 00:20:27.242 08:53:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:27.242 08:53:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:27.500 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:27.500 Zero copy mechanism will not be used. 00:20:27.500 Running I/O for 2 seconds... 
00:20:29.378 6305.00 IOPS, 788.12 MiB/s [2024-11-20T08:54:00.293Z] 6385.00 IOPS, 798.12 MiB/s 00:20:29.378 Latency(us) 00:20:29.378 [2024-11-20T08:54:00.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.378 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:29.378 nvme0n1 : 2.00 6383.31 797.91 0.00 0.00 2501.06 1645.85 4230.05 00:20:29.378 [2024-11-20T08:54:00.293Z] =================================================================================================================== 00:20:29.378 [2024-11-20T08:54:00.293Z] Total : 6383.31 797.91 0.00 0.00 2501.06 1645.85 4230.05 00:20:29.378 { 00:20:29.378 "results": [ 00:20:29.378 { 00:20:29.378 "job": "nvme0n1", 00:20:29.378 "core_mask": "0x2", 00:20:29.378 "workload": "randwrite", 00:20:29.378 "status": "finished", 00:20:29.378 "queue_depth": 16, 00:20:29.378 "io_size": 131072, 00:20:29.378 "runtime": 2.003036, 00:20:29.378 "iops": 6383.310135214744, 00:20:29.378 "mibps": 797.913766901843, 00:20:29.378 "io_failed": 0, 00:20:29.378 "io_timeout": 0, 00:20:29.378 "avg_latency_us": 2501.061645265418, 00:20:29.378 "min_latency_us": 1645.8472727272726, 00:20:29.378 "max_latency_us": 4230.050909090909 00:20:29.378 } 00:20:29.378 ], 00:20:29.378 "core_count": 1 00:20:29.378 } 00:20:29.378 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:29.378 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:29.378 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:29.378 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:29.378 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:29.378 | select(.opcode=="crc32c") 00:20:29.378 | "\(.module_name) \(.executed)"' 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80725 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80725 ']' 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80725 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.637 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80725 00:20:29.896 killing process with pid 80725 00:20:29.896 Received shutdown signal, test time was about 2.000000 seconds 00:20:29.896 00:20:29.896 Latency(us) 00:20:29.896 [2024-11-20T08:54:00.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:29.896 [2024-11-20T08:54:00.811Z] =================================================================================================================== 00:20:29.896 [2024-11-20T08:54:00.811Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.896 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:29.896 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:29.896 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80725' 00:20:29.896 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80725 00:20:29.896 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80725 00:20:29.896 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80525 00:20:29.896 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80525 ']' 00:20:29.896 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80525 00:20:30.155 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:30.155 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.155 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80525 00:20:30.155 killing process with pid 80525 00:20:30.155 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.155 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.155 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80525' 00:20:30.155 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80525 00:20:30.155 08:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80525 00:20:30.427 ************************************ 00:20:30.427 END TEST nvmf_digest_clean 00:20:30.427 ************************************ 00:20:30.427 00:20:30.427 real 0m18.618s 00:20:30.427 user 0m36.993s 00:20:30.427 sys 0m4.880s 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:30.427 ************************************ 00:20:30.427 START TEST nvmf_digest_error 00:20:30.427 ************************************ 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:30.427 08:54:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80814 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80814 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80814 ']' 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.427 08:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:30.427 [2024-11-20 08:54:01.235530] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:30.427 [2024-11-20 08:54:01.235643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.686 [2024-11-20 08:54:01.385152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.686 [2024-11-20 08:54:01.451439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.686 [2024-11-20 08:54:01.451516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.686 [2024-11-20 08:54:01.451530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.686 [2024-11-20 08:54:01.451539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.686 [2024-11-20 08:54:01.451547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:30.686 [2024-11-20 08:54:01.451994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:31.623 [2024-11-20 08:54:02.308597] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:31.623 [2024-11-20 08:54:02.389262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:31.623 null0 00:20:31.623 [2024-11-20 08:54:02.453309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.623 [2024-11-20 08:54:02.477500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80846 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80846 /var/tmp/bperf.sock 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80846 ']' 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.623 08:54:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:31.623 [2024-11-20 08:54:02.534771] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:31.623 [2024-11-20 08:54:02.534881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80846 ] 00:20:31.882 [2024-11-20 08:54:02.683931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.882 [2024-11-20 08:54:02.756655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.140 [2024-11-20 08:54:02.831096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:32.704 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.704 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:32.704 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:32.704 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:33.269 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:33.269 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.269 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:33.269 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.269 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:33.269 08:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:33.527 nvme0n1 00:20:33.528 08:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:33.528 08:54:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.528 08:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:33.528 08:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.528 08:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:33.528 08:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:33.528 Running I/O for 2 seconds... 00:20:33.528 [2024-11-20 08:54:04.439775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.528 [2024-11-20 08:54:04.439870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.528 [2024-11-20 08:54:04.439888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.457662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.457742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.457758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.475211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.475279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.475294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.492530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.492620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.492636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.510090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.510164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.510179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.527542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.527607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11816 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.527622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.545136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.545202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.545217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.562729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.562800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.562825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.579937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.580001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.580015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.597044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.597116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.597131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.614207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.786 [2024-11-20 08:54:04.614277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.786 [2024-11-20 08:54:04.614291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.786 [2024-11-20 08:54:04.631496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.787 [2024-11-20 08:54:04.631551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.787 [2024-11-20 08:54:04.631566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.787 [2024-11-20 08:54:04.648708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.787 [2024-11-20 08:54:04.648753] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.787 [2024-11-20 08:54:04.648777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.787 [2024-11-20 08:54:04.667047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.787 [2024-11-20 08:54:04.667122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.787 [2024-11-20 08:54:04.667137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.787 [2024-11-20 08:54:04.684931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:33.787 [2024-11-20 08:54:04.685014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.787 [2024-11-20 08:54:04.685030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.702203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.702273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.702287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.719186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.719245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.719259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.736295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.736365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.736379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.753313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.753386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.753400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.770349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.770414] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.770428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.788407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.788490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.788506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.806452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.806531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.806547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.823960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.824028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.824043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.841299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.841361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.841377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.858970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.859053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.859070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.877016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.877109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.877126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.894619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.894693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.894708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.911921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.911969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.911983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.929566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.045 [2024-11-20 08:54:04.929623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.045 [2024-11-20 08:54:04.929637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.045 [2024-11-20 08:54:04.946737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.046 [2024-11-20 08:54:04.946791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.046 [2024-11-20 08:54:04.946805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:04.963987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:04.964038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:04.964051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:04.981089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:04.981140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:04.981152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:04.998381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:04.998465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:04.998486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:05.015830] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:05.015953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:05.015968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:05.032737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:05.032773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:05.032786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:05.049528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:05.049579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:05.049592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:05.066294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:05.066345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:05.066358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:05.083000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:05.083036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:05.083048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:05.099765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:05.099827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:05.099841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.304 [2024-11-20 08:54:05.116589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:05.116627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.304 [2024-11-20 08:54:05.116639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:34.304 [2024-11-20 08:54:05.133512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.304 [2024-11-20 08:54:05.133552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.305 [2024-11-20 08:54:05.133564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.305 [2024-11-20 08:54:05.150944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.305 [2024-11-20 08:54:05.150979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.305 [2024-11-20 08:54:05.150992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.305 [2024-11-20 08:54:05.168278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.305 [2024-11-20 08:54:05.168329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.305 [2024-11-20 08:54:05.168342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.305 [2024-11-20 08:54:05.185564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.305 [2024-11-20 08:54:05.185615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.305 [2024-11-20 08:54:05.185627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.305 [2024-11-20 08:54:05.202451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.305 [2024-11-20 08:54:05.202501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.305 [2024-11-20 08:54:05.202514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.219432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.219483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.219496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.236810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.236857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.236869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.254124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.254159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.254172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.271536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.271587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.271615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.289212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.289246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.289260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.306500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.306549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.306562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.323669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.323704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.323716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.340900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.340935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.340947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.358156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.358205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.358218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.375333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.375384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.375397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.392585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.392620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.392632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.409575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.409609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.409622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 14548.00 IOPS, 56.83 MiB/s [2024-11-20T08:54:05.479Z] [2024-11-20 08:54:05.426675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.426711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.426723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.443764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.443814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.443828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.564 [2024-11-20 08:54:05.460888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.564 [2024-11-20 08:54:05.460923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.564 [2024-11-20 08:54:05.460936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.477997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.478032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.478045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.495067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.495101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.495114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.512200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.512232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.512244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.536625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.536660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.536674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.553700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.553737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.553750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.570768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.570820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.570833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.587827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.587859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.587871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.604899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.604933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.604946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.621965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.622001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.622013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.638988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.639023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.639036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.656036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.656069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.656081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.673138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.823 [2024-11-20 08:54:05.673170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.823 [2024-11-20 08:54:05.673183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.823 [2024-11-20 08:54:05.690194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.824 [2024-11-20 08:54:05.690228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.824 [2024-11-20 08:54:05.690241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.824 [2024-11-20 08:54:05.707492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:34.824 [2024-11-20 08:54:05.707531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.824 [2024-11-20 08:54:05.707544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.824 [2024-11-20 08:54:05.724939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14512c0) 00:20:34.824 [2024-11-20 08:54:05.724973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.824 [2024-11-20 08:54:05.724986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.742109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.742145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.742159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.759232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.759269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.759282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.776609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.776644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.776657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.794138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.794202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.794216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.811496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.811555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.811568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.828977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.829024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.829039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.846673] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.846743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.846757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.864251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.864326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.864342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.882240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.882315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.882331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.899706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.899777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.899793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.917374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.917454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.917469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.934978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.935060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.935075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.952812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.952880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.952894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:35.083 [2024-11-20 08:54:05.970422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.970496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.970511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.083 [2024-11-20 08:54:05.988503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.083 [2024-11-20 08:54:05.988626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.083 [2024-11-20 08:54:05.988643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.006598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.006672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.006687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.024057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.024129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.024145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.042108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.042196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.042211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.059553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.059613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.059628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.076981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.077058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.077074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.094562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.094625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.094639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.112393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.112476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.112492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.129953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.130026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.130041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.147573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.147646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.147661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.165160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.165233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.165248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.182729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.182789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.182816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.200674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.200747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.200763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.218297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.218359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.218374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.236088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.236164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.236179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.343 [2024-11-20 08:54:06.253682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.343 [2024-11-20 08:54:06.253753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.343 [2024-11-20 08:54:06.253769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.601 [2024-11-20 08:54:06.271520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.602 [2024-11-20 08:54:06.271605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.602 [2024-11-20 08:54:06.271620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.602 [2024-11-20 08:54:06.289361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.602 [2024-11-20 08:54:06.289441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.602 [2024-11-20 08:54:06.289457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.602 [2024-11-20 08:54:06.307092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.602 [2024-11-20 08:54:06.307155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.602 [2024-11-20 08:54:06.307170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.602 [2024-11-20 08:54:06.332540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.602 [2024-11-20 08:54:06.332687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:35.602 [2024-11-20 08:54:06.332716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.602 [2024-11-20 08:54:06.360520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.602 [2024-11-20 08:54:06.360650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.602 [2024-11-20 08:54:06.360676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.602 [2024-11-20 08:54:06.387554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.602 [2024-11-20 08:54:06.387659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.602 [2024-11-20 08:54:06.387686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.602 [2024-11-20 08:54:06.417462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14512c0) 00:20:35.602 [2024-11-20 08:54:06.417588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.602 [2024-11-20 08:54:06.417618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.602 14232.00 IOPS, 55.59 MiB/s 00:20:35.602 Latency(us) 00:20:35.602 [2024-11-20T08:54:06.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.602 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:35.602 nvme0n1 : 2.02 14222.20 55.56 0.00 0.00 8991.20 8221.79 41704.73 00:20:35.602 [2024-11-20T08:54:06.517Z] =================================================================================================================== 00:20:35.602 [2024-11-20T08:54:06.517Z] Total : 14222.20 55.56 0.00 0.00 8991.20 8221.79 41704.73 00:20:35.602 { 00:20:35.602 "results": [ 00:20:35.602 { 00:20:35.602 "job": "nvme0n1", 00:20:35.602 "core_mask": "0x2", 00:20:35.602 "workload": "randread", 00:20:35.602 "status": "finished", 00:20:35.602 "queue_depth": 128, 00:20:35.602 "io_size": 4096, 00:20:35.602 "runtime": 2.019237, 00:20:35.602 "iops": 14222.203733390385, 00:20:35.602 "mibps": 55.55548333355619, 00:20:35.602 "io_failed": 0, 00:20:35.602 "io_timeout": 0, 00:20:35.602 "avg_latency_us": 8991.196767564215, 00:20:35.602 "min_latency_us": 8221.789090909091, 00:20:35.602 "max_latency_us": 41704.72727272727 00:20:35.602 } 00:20:35.602 ], 00:20:35.602 "core_count": 1 00:20:35.602 } 00:20:35.602 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:35.602 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:35.602 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:35.602 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:20:35.602 | .driver_specific 00:20:35.602 | .nvme_error 00:20:35.602 | .status_code 00:20:35.602 | .command_transient_transport_error' 00:20:35.860 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:20:35.860 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80846 00:20:35.860 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80846 ']' 00:20:35.860 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80846 00:20:35.860 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:35.860 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.860 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80846 00:20:36.120 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:36.120 killing process with pid 80846 00:20:36.120 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:36.120 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80846' 00:20:36.120 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80846 00:20:36.120 Received shutdown signal, test time was about 2.000000 seconds 00:20:36.120 00:20:36.120 Latency(us) 00:20:36.120 [2024-11-20T08:54:07.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.120 [2024-11-20T08:54:07.035Z] =================================================================================================================== 00:20:36.120 [2024-11-20T08:54:07.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.120 08:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80846 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80912 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80912 /var/tmp/bperf.sock 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80912 ']' 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.380 08:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:36.380 [2024-11-20 08:54:07.107916] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:36.380 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:36.380 Zero copy mechanism will not be used. 00:20:36.380 [2024-11-20 08:54:07.108033] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80912 ] 00:20:36.380 [2024-11-20 08:54:07.259925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.639 [2024-11-20 08:54:07.337663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.639 [2024-11-20 08:54:07.409460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:37.575 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:38.143 nvme0n1 00:20:38.143 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:38.143 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.143 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:20:38.143 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.143 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:38.143 08:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:38.143 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:38.143 Zero copy mechanism will not be used. 00:20:38.143 Running I/O for 2 seconds... 00:20:38.143 [2024-11-20 08:54:08.930326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.930596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.930748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.935350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.935552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.935571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.940175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.940216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.940230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.944509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.944547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.944587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.948848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.948886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.948911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.953479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.953518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.953532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.957875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.957911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.957924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.962181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.962221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.962236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.966481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.966520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.966540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.970971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.971008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.971022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.975299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.975337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.975351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.979634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.979671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.979685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.984064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.984101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:38.143 [2024-11-20 08:54:08.984132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.988489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.988527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.988541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.992966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.993003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.993017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:08.997209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:08.997247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:08.997261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:09.001570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:09.001608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:09.001623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:09.006183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:09.006219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:09.006233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:09.010480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:09.010518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:09.010532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:09.014914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:09.014951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:09.014964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:09.019267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:09.019315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:09.019338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.143 [2024-11-20 08:54:09.023547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.143 [2024-11-20 08:54:09.023585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.143 [2024-11-20 08:54:09.023598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.144 [2024-11-20 08:54:09.028056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.144 [2024-11-20 08:54:09.028094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.144 [2024-11-20 08:54:09.028108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.144 [2024-11-20 08:54:09.032435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.144 [2024-11-20 08:54:09.032475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.144 [2024-11-20 08:54:09.032490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.144 [2024-11-20 08:54:09.036826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.144 [2024-11-20 08:54:09.036866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.144 [2024-11-20 08:54:09.036881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.144 [2024-11-20 08:54:09.041498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.144 [2024-11-20 08:54:09.041538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.144 [2024-11-20 08:54:09.041552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.144 [2024-11-20 08:54:09.045836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.144 [2024-11-20 08:54:09.045887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.144 [2024-11-20 08:54:09.045930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.144 [2024-11-20 08:54:09.050184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.144 [2024-11-20 08:54:09.050508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.144 [2024-11-20 08:54:09.050526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.144 [2024-11-20 08:54:09.055005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.144 [2024-11-20 08:54:09.055057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.144 [2024-11-20 08:54:09.055072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.059344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.059396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.059410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.063887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.063927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.063941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.068349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.068416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.068430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.072738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.072778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.072792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.077091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 
00:20:38.402 [2024-11-20 08:54:09.077129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.077143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.081223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.081262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.081277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.085626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.085665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.085679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.090113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.090168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.090184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.094617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.094670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.094686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.099018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.099075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.099091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.103372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.103428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.103444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.107966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.108028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.108044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.112483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.112546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.112590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.116992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.117041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.117056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.121352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.121397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.121412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.125693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.125737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.125752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.130134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.130175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.130190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.134429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.134478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.134492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.138753] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.138791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.138817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.142985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.143023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.143036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.147195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.147237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.147250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.151457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.151495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.151508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.155702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.155741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.155755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.159991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.160029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.160043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.164174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.164212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.164226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:20:38.402 [2024-11-20 08:54:09.168513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.168548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.168569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.172987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.173024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.173037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.177019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.177056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.177070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.181088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.181124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.181137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.185183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.185218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.185232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.189188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.189224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.189236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.193467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.193515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.193528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.197456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.197490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.197503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.201558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.201606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.201618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.205666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.205713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.205726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.209762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.209822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.209837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.214037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.402 [2024-11-20 08:54:09.214092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.402 [2024-11-20 08:54:09.214104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.402 [2024-11-20 08:54:09.218373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.218427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.218440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.222717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.222786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.222814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.226997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.227065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.227079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.231712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.231789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.231834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.236255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.236316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.236330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.240773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.240828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.240843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.245123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.245161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.245174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.249470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.249505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.249518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.253894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.253944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 
[2024-11-20 08:54:09.253957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.258435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.258470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.258483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.263050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.263087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.263101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.267410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.267445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.267458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.271839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.271883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.271897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.276288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.276330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.276344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.280876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.280914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.280929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.285369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.285408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.285423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.289716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.289754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.289768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.294136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.294173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.294186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.298421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.298458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.298471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.302824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.302893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.302917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.307420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.307619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.307637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.403 [2024-11-20 08:54:09.312017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.403 [2024-11-20 08:54:09.312079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.403 [2024-11-20 08:54:09.312094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.316489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.316524] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.316537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.321149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.321199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.321213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.325532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.325567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.325580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.330113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.330161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.330179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.334530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.334566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.334579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.339261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.339300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.339314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.343757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.343819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.343850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.348253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 
08:54:09.348293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.348307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.352834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.352873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.352888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.357439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.357475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.357488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.361820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.361869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.361906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.366356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.366396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.366410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.370838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.370911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.370928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.375325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.375366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.375380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.379842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.379893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.379924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.384348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.384549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.384591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.389061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.389109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.389124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.393546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.393583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.393596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.398152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.398190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.398213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.402892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.402930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.402943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.407394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.407460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.407473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.411959] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.663 [2024-11-20 08:54:09.411994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.663 [2024-11-20 08:54:09.412013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.663 [2024-11-20 08:54:09.416607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.416646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.416660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.421251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.421290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.421305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.425955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.425990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.426003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.430674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.430710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.430724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.435254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.435293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.435310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.439692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.439731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.439750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:20:38.664 [2024-11-20 08:54:09.444059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.444097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.444110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.448412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.448448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.448462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.452910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.452955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.452968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.457437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.457476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.457491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.461887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.461935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.461950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.466126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.466163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.466177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.470446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.470490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.470503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.474832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.474874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.474889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.479127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.479193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.479209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.483601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.483642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.483656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.488170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.488449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.488482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.492896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.492935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.492950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.497203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.497240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.497253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.501464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.501498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.501510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.505564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.505617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.505631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.509878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.509919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.509932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.514326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.514364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.514378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.518723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.518761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.518775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.523122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.523161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.523176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.527576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.527611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.664 [2024-11-20 08:54:09.527624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.664 [2024-11-20 08:54:09.532126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.664 [2024-11-20 08:54:09.532162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 
[2024-11-20 08:54:09.532175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.536735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.536775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.536789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.541252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.541290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.541304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.545738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.545774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.545786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.550458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.550649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.550671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.555043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.555098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.555113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.559560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.559595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.559608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.564202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.564240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.564259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.568777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.568822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.568836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.665 [2024-11-20 08:54:09.573474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.665 [2024-11-20 08:54:09.573506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.665 [2024-11-20 08:54:09.573517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.924 [2024-11-20 08:54:09.578140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.924 [2024-11-20 08:54:09.578173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.924 [2024-11-20 08:54:09.578187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.924 [2024-11-20 08:54:09.582691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.924 [2024-11-20 08:54:09.582722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.924 [2024-11-20 08:54:09.582733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.924 [2024-11-20 08:54:09.587202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.924 [2024-11-20 08:54:09.587236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.924 [2024-11-20 08:54:09.587249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.924 [2024-11-20 08:54:09.591713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.924 [2024-11-20 08:54:09.591747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.924 [2024-11-20 08:54:09.591760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.596285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.596322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.596335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.600994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.601041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.601054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.605709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.605757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.605770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.610418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.610479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.610490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.614982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.615012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.615039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.619457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.619487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.619499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.623760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.623790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.623813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.628109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.628140] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.628152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.632310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.632343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.632355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.636634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.636668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.636682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.640988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.641018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.641040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.645368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.645460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.645471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.649808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.649853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.649865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.654143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.654175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.654187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.658365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.658411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.658423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.662416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.662446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.662457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.666528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.666560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.666572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.670685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.670715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.670726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.674751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.674783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.674795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.679043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.679093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.679106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.683673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.683709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.683722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.688357] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.688436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.688449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.693003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.693064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.693077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.697525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.697555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.697567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.702038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.702103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.702116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.706488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.925 [2024-11-20 08:54:09.706519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.925 [2024-11-20 08:54:09.706530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.925 [2024-11-20 08:54:09.710836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.710878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.710890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.715021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.715084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.715097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.719233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.719269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.719282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.723576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.723607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.723619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.727789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.727832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.727844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.731625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.731656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.731667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.735665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.735697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.735709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.739557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.739588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.739599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.743546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.743577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.743589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.747868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.747915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.747942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.752397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.752429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.752441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.756906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.756954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.756966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.761705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.761738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.761752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.766186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.766250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.766263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.770781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.770807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.770820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.775380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.775426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.775438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.780017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.780065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.780079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.784760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.784794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.784821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.789861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.789946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.789959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.794987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.795038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.795052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.799708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.799742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.799755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.804255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.804289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.804302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.808668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.808712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 
[2024-11-20 08:54:09.808726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.813205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.813241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.813255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.817812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.817910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.817924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.823238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.823270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.823284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.828105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.926 [2024-11-20 08:54:09.828138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.926 [2024-11-20 08:54:09.828151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.926 [2024-11-20 08:54:09.832822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:38.927 [2024-11-20 08:54:09.832856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.927 [2024-11-20 08:54:09.832869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.837300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.186 [2024-11-20 08:54:09.837334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.186 [2024-11-20 08:54:09.837347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.842262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.186 [2024-11-20 08:54:09.842294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23744 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.186 [2024-11-20 08:54:09.842307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.847358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.186 [2024-11-20 08:54:09.847420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.186 [2024-11-20 08:54:09.847432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.852197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.186 [2024-11-20 08:54:09.852230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.186 [2024-11-20 08:54:09.852243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.857105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.186 [2024-11-20 08:54:09.857140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.186 [2024-11-20 08:54:09.857153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.861897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.186 [2024-11-20 08:54:09.861971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.186 [2024-11-20 08:54:09.861984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.867063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.186 [2024-11-20 08:54:09.867096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.186 [2024-11-20 08:54:09.867110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.872232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.186 [2024-11-20 08:54:09.872266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.186 [2024-11-20 08:54:09.872279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.186 [2024-11-20 08:54:09.877141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.877185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.877199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.883594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.883627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.883640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.890125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.890158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.890180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.896049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.896081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.896094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.901806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.901845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.901872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.907128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.907159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.907179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.912546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.912600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.912624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.918232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.918263] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.918276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.187 6820.00 IOPS, 852.50 MiB/s [2024-11-20T08:54:10.102Z] [2024-11-20 08:54:09.925826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.925868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.925880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.931725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.931755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.931766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.937870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.937910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.937922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.943896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.943925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.943936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.949980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.950021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.950050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.955811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.955849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.955861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.961513] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.961558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.961569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.966409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.966455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.966466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.970858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.970899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.970912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.975466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.975499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.975512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.979940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.979970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.979982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.984516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.984572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.984604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.989051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.989085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.989098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:20:39.187 [2024-11-20 08:54:09.993302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.993335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.993347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:09.997566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:09.997599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:09.997616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:10.001925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:10.001970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:10.001984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:10.006266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:10.006302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:10.006315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:10.010553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:10.010588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:10.010601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.187 [2024-11-20 08:54:10.014886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.187 [2024-11-20 08:54:10.014921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.187 [2024-11-20 08:54:10.014934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.019153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.019187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.019200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.023447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.023480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.023493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.027761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.027794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.027821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.032070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.032104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.032117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.036344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.036377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.036391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.040693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.040730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.040744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.045049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.045083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.045097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.049297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.049332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.049345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.053612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.053668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.053681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.058010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.058045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.058058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.062258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.062292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.062305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.066572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.066606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.066619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.070883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.070916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.070929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.075109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.075142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.075155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.079466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.079499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 
08:54:10.079511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.083922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.083955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.083975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.088229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.088263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.088276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.092547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.092606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.092620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.188 [2024-11-20 08:54:10.096856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.188 [2024-11-20 08:54:10.096888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.188 [2024-11-20 08:54:10.096901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.101145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.101178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.101191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.105426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.105463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.105476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.109766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.109824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.109838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.114182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.114229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.114241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.118515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.118556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.118569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.122824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.122857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.122869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.127114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.127156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.127178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.131447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.131479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.131493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.135577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.135609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.135622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.139896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.139928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.139942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.144161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.144193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.144207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.148540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.148588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.148612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.152905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.152937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.152950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.157159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.157193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.157206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.161419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.161461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.161474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.165694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.165737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.165750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.170016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.170050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.170063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.174278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.174311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.174324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.178576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.178613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.178626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.182906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.182938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.182951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.187251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.187284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.187298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.191566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.191599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.449 [2024-11-20 08:54:10.191613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.449 [2024-11-20 08:54:10.195922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.449 [2024-11-20 08:54:10.195956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.195969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.200220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 
[2024-11-20 08:54:10.200253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.200266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.204464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.204496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.204509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.208826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.208864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.208877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.213041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.213083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.213096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.217328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.217362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.217375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.221673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.221707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.221720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.225963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.225997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.226010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.230348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.230382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.230395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.234689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.234723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.234737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.239056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.239090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.239103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.243409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.243452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.243464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.247777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.247821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.247835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.252048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.252081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.252095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.256321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.256365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.256378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.260654] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.260687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.260700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.264946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.264989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.265002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.269219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.269251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.269264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.273451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.273484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.273497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.277719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.277751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.277764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.282006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.282039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.282053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.286252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.286286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.286299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.290534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.290567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.290580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.294832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.294863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.294876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.299164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.299198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.299211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.303444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.303477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.303490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.307757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.450 [2024-11-20 08:54:10.307791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.450 [2024-11-20 08:54:10.307819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.450 [2024-11-20 08:54:10.312013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.312044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.312057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.316361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.316393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.316405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.320829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.320862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.320875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.325008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.325055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.325068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.329571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.329606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.329618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.333852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.333884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.333897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.338178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.338238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.338249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.342555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.342597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.342610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.347044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.347074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.347085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.351142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.351190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.351202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.355277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.355309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.355322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.451 [2024-11-20 08:54:10.359426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.451 [2024-11-20 08:54:10.359458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.451 [2024-11-20 08:54:10.359471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.711 [2024-11-20 08:54:10.363853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.711 [2024-11-20 08:54:10.363910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.711 [2024-11-20 08:54:10.363923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.711 [2024-11-20 08:54:10.368422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.711 [2024-11-20 08:54:10.368471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.711 [2024-11-20 08:54:10.368484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.711 [2024-11-20 08:54:10.372928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.711 [2024-11-20 08:54:10.372962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.711 [2024-11-20 08:54:10.372976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.711 [2024-11-20 08:54:10.377385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.711 [2024-11-20 08:54:10.377420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:39.711 [2024-11-20 08:54:10.377434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.711 [2024-11-20 08:54:10.381926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.711 [2024-11-20 08:54:10.381991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.711 [2024-11-20 08:54:10.382003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.711 [2024-11-20 08:54:10.386502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.711 [2024-11-20 08:54:10.386551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.711 [2024-11-20 08:54:10.386563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.391007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.391070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.391082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.395343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.395392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.395405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.399426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.399473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.399485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.403549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.403597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.403609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.407668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.407716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.407728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.411929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.411976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.411989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.416290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.416340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.416353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.420773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.420823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.420837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.425277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.425326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.425339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.429798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.429842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.429855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.434074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.434109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.434122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.438525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.438561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.438574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.442772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.442821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.442835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.447161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.447210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.447222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.451623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.451658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.451671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.456089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.456148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.456161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.460494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.460544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.460557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.464984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.465034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.465059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.469450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 
[2024-11-20 08:54:10.469503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.469531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.473837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.473885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.473898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.478167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.478208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.478221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.482348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.482398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.482426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.486605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.486653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.486666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.491017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.491065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.491079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.495431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.495480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.495493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.499756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.499805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.499829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.712 [2024-11-20 08:54:10.504098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.712 [2024-11-20 08:54:10.504147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.712 [2024-11-20 08:54:10.504160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.508516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.508589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.508604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.512774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.512823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.512837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.517083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.517118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.517130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.521400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.521451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.521465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.525887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.525921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.525934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.530220] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.530285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.530298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.534583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.534618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.534630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.538870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.538905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.538918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.543101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.543136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.543148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.547306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.547341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.547354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.551662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.551712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.551725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.555978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.556027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.556040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
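The repeated "data digest error on tqpair" entries above, each followed by a READ command completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), report that the host-side CRC32C check over received data did not match the digest carried with the PDU. As a minimal, hedged sketch of what such a digest comparison amounts to conceptually (an illustration only, not SPDK's implementation; buffer contents and the received digest value here are hypothetical):

/*
 * Minimal sketch (assumption, not SPDK code): a CRC32C data digest check of
 * the kind whose failure is logged above as "data digest error".
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical received payload and the digest claimed for it. */
    uint8_t payload[32];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t received_digest = 0;  /* pretend this value came off the wire */
    uint32_t computed_digest = crc32c(payload, sizeof(payload));

    if (computed_digest != received_digest) {
        /* This mismatch is the condition the log reports as a data digest error. */
        printf("data digest mismatch: computed 0x%08x, received 0x%08x\n",
               computed_digest, received_digest);
    }
    return 0;
}

On a mismatch the affected I/O is not silently accepted; as the NOTICE lines in this log show, each READ is completed back to the caller with the transient transport error status (00/22) printed alongside it.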
00:20:39.713 [2024-11-20 08:54:10.560341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.560376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.560390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.564754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.564789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.564814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.569148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.569216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.569230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.573455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.573504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.573517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.577888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.577942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.577956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.582254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.582319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.582332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.586594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.586651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.586664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.590935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.590985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.590998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.595277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.595332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.595345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.599659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.599698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.599711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.603973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.604022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.604035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.608346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.608381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.608394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.612726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.612761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.612774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.617044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.617079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.713 [2024-11-20 08:54:10.617092] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.713 [2024-11-20 08:54:10.621344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.713 [2024-11-20 08:54:10.621379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.714 [2024-11-20 08:54:10.621392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.625731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.625783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.625797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.630138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.630173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.630186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.634508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.634542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.634555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.638935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.638997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.639010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.643286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.643334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.643347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.647609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.647659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 
08:54:10.647672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.651999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.652047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.652060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.656353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.656401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.656414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.660697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.660735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.660749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.665027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.665074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.665087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.669418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.669467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.669479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.673714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.673766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.673779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.678150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.678185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.678200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.682493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.682528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.682541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.686852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.686885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.686897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.691207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.691242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.691255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.695501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.695535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.695548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.699807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.699841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.699853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.704079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.704116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.704129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.708305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.708339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.708352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.712674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.712708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.712722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.716954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.716988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.717000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.721247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.721282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.721295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.725548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.975 [2024-11-20 08:54:10.725581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.975 [2024-11-20 08:54:10.725594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.975 [2024-11-20 08:54:10.729815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.729847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.729861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.734094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.734128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.734140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.738370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.738405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.738418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.742620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.742654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.742666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.746908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.746941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.746954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.751190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.751224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.751239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.755466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.755499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.755512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.759735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.759769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.759783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.763970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.764003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.764016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.768246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 
[2024-11-20 08:54:10.768281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.768293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.772532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.772574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.772589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.776718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.776752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.776764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.781074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.781108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.781121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.785439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.785473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.785486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.789780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.789825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.789838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.794101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.794136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.794148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.798393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.798427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.798440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.802652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.802686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.802699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.806939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.806972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.806985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.811226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.811260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.811273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.815534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.815579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.815592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.819863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.819896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.819909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.824197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.824231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.824245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.828525] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.828559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.828582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.832862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.832898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.832912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.837104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.837138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.837151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.841413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.976 [2024-11-20 08:54:10.841448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.976 [2024-11-20 08:54:10.841461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.976 [2024-11-20 08:54:10.845612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.845646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.845659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.849906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.849944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.849957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.854144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.854178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.854192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.858415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.858449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.858461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.862657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.862691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.862704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.866922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.866955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.866968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.871165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.871199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.871212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.875344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.875378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.875391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.879579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.879613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.879626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.977 [2024-11-20 08:54:10.883817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:39.977 [2024-11-20 08:54:10.883851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.977 [2024-11-20 08:54:10.883863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.236 [2024-11-20 08:54:10.888097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:40.237 [2024-11-20 08:54:10.888132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.237 [2024-11-20 08:54:10.888144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.237 [2024-11-20 08:54:10.892337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:40.237 [2024-11-20 08:54:10.892371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.237 [2024-11-20 08:54:10.892384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.237 [2024-11-20 08:54:10.896519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:40.237 [2024-11-20 08:54:10.896553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.237 [2024-11-20 08:54:10.896575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.237 [2024-11-20 08:54:10.900740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:40.237 [2024-11-20 08:54:10.900775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.237 [2024-11-20 08:54:10.900788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.237 [2024-11-20 08:54:10.905054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:40.237 [2024-11-20 08:54:10.905088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.237 [2024-11-20 08:54:10.905100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.237 [2024-11-20 08:54:10.909258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:40.237 [2024-11-20 08:54:10.909293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.237 [2024-11-20 08:54:10.909306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.237 [2024-11-20 08:54:10.913485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400) 00:20:40.237 [2024-11-20 08:54:10.913519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.237 [2024-11-20 08:54:10.913532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:40.237 [2024-11-20 08:54:10.917684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400)
00:20:40.237 [2024-11-20 08:54:10.917718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.237 [2024-11-20 08:54:10.917730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:40.237 [2024-11-20 08:54:10.921969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400)
00:20:40.237 [2024-11-20 08:54:10.922003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.237 [2024-11-20 08:54:10.922016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:40.237 6951.50 IOPS, 868.94 MiB/s [2024-11-20T08:54:11.152Z] [2024-11-20 08:54:10.927835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d2400)
00:20:40.237 [2024-11-20 08:54:10.927868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.237 [2024-11-20 08:54:10.927881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:40.237
00:20:40.237 Latency(us)
00:20:40.237 [2024-11-20T08:54:11.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:40.237 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:40.237 nvme0n1 : 2.00 6952.80 869.10 0.00 0.00 2297.67 1869.27 9413.35
00:20:40.237 [2024-11-20T08:54:11.152Z] ===================================================================================================================
00:20:40.237 [2024-11-20T08:54:11.152Z] Total : 6952.80 869.10 0.00 0.00 2297.67 1869.27 9413.35
00:20:40.237 {
00:20:40.237 "results": [
00:20:40.237 {
00:20:40.237 "job": "nvme0n1",
00:20:40.237 "core_mask": "0x2",
00:20:40.237 "workload": "randread",
00:20:40.237 "status": "finished",
00:20:40.237 "queue_depth": 16,
00:20:40.237 "io_size": 131072,
00:20:40.237 "runtime": 2.004228,
00:20:40.237 "iops": 6952.801777043331,
00:20:40.237 "mibps": 869.1002221304163,
00:20:40.237 "io_failed": 0,
00:20:40.237 "io_timeout": 0,
00:20:40.237 "avg_latency_us": 2297.6677472681604,
00:20:40.237 "min_latency_us": 1869.2654545454545,
00:20:40.237 "max_latency_us": 9413.352727272728
00:20:40.237 }
00:20:40.237 ],
00:20:40.237 "core_count": 1
00:20:40.237 }
00:20:40.237 08:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:40.237 08:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:40.237 08:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:40.237 08:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:40.237 | .driver_specific
00:20:40.237 | .nvme_error 00:20:40.237 | .status_code 00:20:40.237 | .command_transient_transport_error' 00:20:40.496 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 450 > 0 )) 00:20:40.496 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80912 00:20:40.496 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80912 ']' 00:20:40.496 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80912 00:20:40.497 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:40.497 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.497 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80912 00:20:40.497 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:40.497 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:40.497 killing process with pid 80912 00:20:40.497 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80912' 00:20:40.497 Received shutdown signal, test time was about 2.000000 seconds 00:20:40.497 00:20:40.497 Latency(us) 00:20:40.497 [2024-11-20T08:54:11.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.497 [2024-11-20T08:54:11.412Z] =================================================================================================================== 00:20:40.497 [2024-11-20T08:54:11.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.497 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80912 00:20:40.497 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80912 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80967 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80967 /var/tmp/bperf.sock 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80967 ']' 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 
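The get_transient_errcount step traced above is what decides pass/fail for the randread iteration: it reads bdevperf's per-bdev NVMe error counters over the /var/tmp/bperf.sock RPC socket and pulls out the transient-transport-error count that the injected digest failures produce. A minimal stand-alone sketch of that query follows; the SPDK path and the nvme0n1 bdev name are the values from this run and should be treated as placeholders elsewhere, and it assumes the controller was set up with bdev_nvme_set_options --nvme-error-stat so the counters exist.
#!/usr/bin/env bash
# Hedged sketch of the transient-error query used by host/digest.sh above.
# Assumes bdevperf is already listening on /var/tmp/bperf.sock and exposes a
# bdev named nvme0n1; adjust SPDK_DIR to the local checkout.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
BPERF_SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1
# bdev_get_iostat reports driver-specific NVMe error counters when
# --nvme-error-stat was enabled; the digest test only cares about the
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) count raised on digest failures.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$BDEV" \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# In the run above this read back 450 and the "(( 450 > 0 ))" check passed:
# every corrupted digest surfaced as a counted transport error, not as bad data.
(( errcount > 0 )) && echo "digest errors detected: $errcount" || echo "no digest errors counted"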
00:20:40.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.756 08:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:40.756 [2024-11-20 08:54:11.593785] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:40.756 [2024-11-20 08:54:11.593899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80967 ] 00:20:41.016 [2024-11-20 08:54:11.739487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.016 [2024-11-20 08:54:11.814679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.016 [2024-11-20 08:54:11.886247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:41.980 08:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:42.547 nvme0n1 00:20:42.547 08:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:42.547 08:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.547 08:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.547 08:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.547 08:54:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:42.547 08:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:42.547 Running I/O for 2 seconds... 00:20:42.547 [2024-11-20 08:54:13.409340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fb048 00:20:42.547 [2024-11-20 08:54:13.410811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.547 [2024-11-20 08:54:13.410852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.547 [2024-11-20 08:54:13.425466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fb8b8 00:20:42.547 [2024-11-20 08:54:13.426896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.547 [2024-11-20 08:54:13.426928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.547 [2024-11-20 08:54:13.441586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fc128 00:20:42.547 [2024-11-20 08:54:13.443015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.547 [2024-11-20 08:54:13.443047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:42.547 [2024-11-20 08:54:13.458251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fc998 00:20:42.547 [2024-11-20 08:54:13.459655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.547 [2024-11-20 08:54:13.459688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.474764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fd208 00:20:42.806 [2024-11-20 08:54:13.476158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.476191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.491258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fda78 00:20:42.806 [2024-11-20 08:54:13.492628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.492661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.507324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fe2e8 
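The WRITE-side digest errors now streaming past are the expected product of the setup traced just above: the accel layer is told to corrupt a batch of crc32c results while the controller is attached with data digest enabled, so each affected PDU fails its digest check and completes with a transient transport error. A condensed sketch of that RPC sequence follows; the socket path, the 10.0.0.3:4420 target address and the subsystem NQN are the values from this run, and the split between the bperf socket and the bare rpc_cmd socket for the injection mirrors the trace rather than a documented requirement.
#!/usr/bin/env bash
# Condensed sketch of the error-injection setup shown in the trace (not a
# drop-in replacement for host/digest.sh). Assumes bdevperf was started with
# "-r /var/tmp/bperf.sock -z" and an nvmf target is reachable at 10.0.0.3:4420.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf's RPC socket
TGT_RPC="$SPDK_DIR/scripts/rpc.py"   # add -s <sock> for the app that should see the injection (rpc_cmd in the trace)
# Keep per-bdev NVMe error counters and disable retries so every digest failure
# is visible as a COMMAND TRANSIENT TRANSPORT ERROR completion.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the target with data digest enabled (--ddgst) on the TCP transport.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the next 256 crc32c operations in the accel framework; with data
# digest on, each corrupted CRC shows up as the "data digest error" lines here.
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
# Drive the timed workload through bdevperf's helper script.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests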
00:20:42.806 [2024-11-20 08:54:13.508639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.508671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.523383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166feb58 00:20:42.806 [2024-11-20 08:54:13.524703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.524739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.546224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fef90 00:20:42.806 [2024-11-20 08:54:13.548757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.548793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.562233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166feb58 00:20:42.806 [2024-11-20 08:54:13.564733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.564767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.578268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fe2e8 00:20:42.806 [2024-11-20 08:54:13.580743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.580777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.594191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fda78 00:20:42.806 [2024-11-20 08:54:13.596655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.596688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.610680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fd208 00:20:42.806 [2024-11-20 08:54:13.613157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.613188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.626916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with 
pdu=0x2000166fc998 00:20:42.806 [2024-11-20 08:54:13.629339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.629371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.643004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fc128 00:20:42.806 [2024-11-20 08:54:13.645413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.645449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.659037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fb8b8 00:20:42.806 [2024-11-20 08:54:13.661439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.806 [2024-11-20 08:54:13.661472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:42.806 [2024-11-20 08:54:13.675098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fb048 00:20:42.807 [2024-11-20 08:54:13.677470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.807 [2024-11-20 08:54:13.677499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.807 [2024-11-20 08:54:13.691106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166fa7d8 00:20:42.807 [2024-11-20 08:54:13.693447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.807 [2024-11-20 08:54:13.693480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:42.807 [2024-11-20 08:54:13.707100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f9f68 00:20:42.807 [2024-11-20 08:54:13.709438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.807 [2024-11-20 08:54:13.709470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.723159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f96f8 00:20:43.066 [2024-11-20 08:54:13.725464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.725496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.739175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20cc5b0) with pdu=0x2000166f8e88 00:20:43.066 [2024-11-20 08:54:13.741458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.741490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.755136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f8618 00:20:43.066 [2024-11-20 08:54:13.757404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.757439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.771171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f7da8 00:20:43.066 [2024-11-20 08:54:13.773428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.773461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.787156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f7538 00:20:43.066 [2024-11-20 08:54:13.789379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.789412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.803137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f6cc8 00:20:43.066 [2024-11-20 08:54:13.805345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.805378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.819190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f6458 00:20:43.066 [2024-11-20 08:54:13.821417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.821465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.835569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f5be8 00:20:43.066 [2024-11-20 08:54:13.837735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.837775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.851607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f5378 00:20:43.066 [2024-11-20 08:54:13.853771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.853816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.867695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f4b08 00:20:43.066 [2024-11-20 08:54:13.869850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.869880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.883764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f4298 00:20:43.066 [2024-11-20 08:54:13.885886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.885917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.899769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f3a28 00:20:43.066 [2024-11-20 08:54:13.901889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.901920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.915762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f31b8 00:20:43.066 [2024-11-20 08:54:13.917847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.917876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.931729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f2948 00:20:43.066 [2024-11-20 08:54:13.933810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.933851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.947821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f20d8 00:20:43.066 [2024-11-20 08:54:13.949847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.949879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:43.066 [2024-11-20 08:54:13.963903] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f1868 00:20:43.066 [2024-11-20 08:54:13.965927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.066 [2024-11-20 08:54:13.965959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:43.325 [2024-11-20 08:54:13.979969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f0ff8 00:20:43.325 [2024-11-20 08:54:13.981973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.325 [2024-11-20 08:54:13.982005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:43.325 [2024-11-20 08:54:13.996013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f0788 00:20:43.325 [2024-11-20 08:54:13.997997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.325 [2024-11-20 08:54:13.998029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:43.325 [2024-11-20 08:54:14.012009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166eff18 00:20:43.325 [2024-11-20 08:54:14.013982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.325 [2024-11-20 08:54:14.014013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:43.325 [2024-11-20 08:54:14.028057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ef6a8 00:20:43.325 [2024-11-20 08:54:14.029999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.325 [2024-11-20 08:54:14.030030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:43.325 [2024-11-20 08:54:14.044056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166eee38 00:20:43.325 [2024-11-20 08:54:14.046004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.325 [2024-11-20 08:54:14.046036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:43.325 [2024-11-20 08:54:14.060155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ee5c8 00:20:43.325 [2024-11-20 08:54:14.062062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.325 [2024-11-20 08:54:14.062093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:43.325 
[2024-11-20 08:54:14.076233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166edd58 00:20:43.325 [2024-11-20 08:54:14.078128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.325 [2024-11-20 08:54:14.078159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:43.325 [2024-11-20 08:54:14.092300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ed4e8 00:20:43.325 [2024-11-20 08:54:14.094162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.325 [2024-11-20 08:54:14.094191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:43.325 [2024-11-20 08:54:14.108274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ecc78 00:20:43.326 [2024-11-20 08:54:14.110116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.110146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:43.326 [2024-11-20 08:54:14.124276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ec408 00:20:43.326 [2024-11-20 08:54:14.126100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.126131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:43.326 [2024-11-20 08:54:14.140236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ebb98 00:20:43.326 [2024-11-20 08:54:14.142046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.142076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:43.326 [2024-11-20 08:54:14.156279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166eb328 00:20:43.326 [2024-11-20 08:54:14.158075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.158107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:43.326 [2024-11-20 08:54:14.172329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166eaab8 00:20:43.326 [2024-11-20 08:54:14.174106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.174138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:20:43.326 [2024-11-20 08:54:14.188350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ea248 00:20:43.326 [2024-11-20 08:54:14.190101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.190131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.326 [2024-11-20 08:54:14.204336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e99d8 00:20:43.326 [2024-11-20 08:54:14.206066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.206096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:43.326 [2024-11-20 08:54:14.220311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e9168 00:20:43.326 [2024-11-20 08:54:14.222021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.222052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:43.326 [2024-11-20 08:54:14.236266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e88f8 00:20:43.326 [2024-11-20 08:54:14.237959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.326 [2024-11-20 08:54:14.237990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.252256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e8088 00:20:43.585 [2024-11-20 08:54:14.253928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.253959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.268197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e7818 00:20:43.585 [2024-11-20 08:54:14.269847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.269877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.284158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e6fa8 00:20:43.585 [2024-11-20 08:54:14.285782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.285821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 
cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.300098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e6738 00:20:43.585 [2024-11-20 08:54:14.301699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.301730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.316066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e5ec8 00:20:43.585 [2024-11-20 08:54:14.317648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.317679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.332027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e5658 00:20:43.585 [2024-11-20 08:54:14.333596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.333627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.348051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e4de8 00:20:43.585 [2024-11-20 08:54:14.349591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.349622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.363992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e4578 00:20:43.585 [2024-11-20 08:54:14.365510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.365540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.379988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e3d08 00:20:43.585 [2024-11-20 08:54:14.381501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.381669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:43.585 15688.00 IOPS, 61.28 MiB/s [2024-11-20T08:54:14.500Z] [2024-11-20 08:54:14.397787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e3498 00:20:43.585 [2024-11-20 08:54:14.399464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.399635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.414489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e2c28 00:20:43.585 [2024-11-20 08:54:14.416142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.416315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.431069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e23b8 00:20:43.585 [2024-11-20 08:54:14.432682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.432880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.447526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e1b48 00:20:43.585 [2024-11-20 08:54:14.449145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.449323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.464081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e12d8 00:20:43.585 [2024-11-20 08:54:14.465671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.465867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.480739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e0a68 00:20:43.585 [2024-11-20 08:54:14.482312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.585 [2024-11-20 08:54:14.482496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:43.585 [2024-11-20 08:54:14.497328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e01f8 00:20:43.844 [2024-11-20 08:54:14.498867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.844 [2024-11-20 08:54:14.499039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:43.844 [2024-11-20 08:54:14.513911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166df988 00:20:43.844 [2024-11-20 08:54:14.515274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.844 [2024-11-20 
08:54:14.515312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:43.844 [2024-11-20 08:54:14.529950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166df118 00:20:43.844 [2024-11-20 08:54:14.531278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.844 [2024-11-20 08:54:14.531314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:43.844 [2024-11-20 08:54:14.545946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166de8a8 00:20:43.844 [2024-11-20 08:54:14.547253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.844 [2024-11-20 08:54:14.547289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:43.844 [2024-11-20 08:54:14.561939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166de038 00:20:43.845 [2024-11-20 08:54:14.563221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.563255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.584822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166de038 00:20:43.845 [2024-11-20 08:54:14.587318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.587353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.601197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166de8a8 00:20:43.845 [2024-11-20 08:54:14.603704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.603740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.617539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166df118 00:20:43.845 [2024-11-20 08:54:14.620106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.620141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.633812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166df988 00:20:43.845 [2024-11-20 08:54:14.636284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:43.845 [2024-11-20 08:54:14.636334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.650369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e01f8 00:20:43.845 [2024-11-20 08:54:14.652862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.652898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.666943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e0a68 00:20:43.845 [2024-11-20 08:54:14.669409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.669445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.683288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e12d8 00:20:43.845 [2024-11-20 08:54:14.685680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.685715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.699287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e1b48 00:20:43.845 [2024-11-20 08:54:14.701676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.701841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.715410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e23b8 00:20:43.845 [2024-11-20 08:54:14.717761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.717930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.731521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e2c28 00:20:43.845 [2024-11-20 08:54:14.733859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.733895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:43.845 [2024-11-20 08:54:14.747519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e3498 00:20:43.845 [2024-11-20 08:54:14.749844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6104 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.845 [2024-11-20 08:54:14.749879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.763510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e3d08 00:20:44.104 [2024-11-20 08:54:14.765810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.765845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.779481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e4578 00:20:44.104 [2024-11-20 08:54:14.781755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.781925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.795643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e4de8 00:20:44.104 [2024-11-20 08:54:14.797935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.797971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.811738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e5658 00:20:44.104 [2024-11-20 08:54:14.814005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.814043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.827767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e5ec8 00:20:44.104 [2024-11-20 08:54:14.830005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.830042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.843822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e6738 00:20:44.104 [2024-11-20 08:54:14.846030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.846067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.859855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e6fa8 00:20:44.104 [2024-11-20 08:54:14.862032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:1827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.862070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.875907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e7818 00:20:44.104 [2024-11-20 08:54:14.878067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.878103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.891945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e8088 00:20:44.104 [2024-11-20 08:54:14.894090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.894127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.907986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e88f8 00:20:44.104 [2024-11-20 08:54:14.910116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.910152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.923973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e9168 00:20:44.104 [2024-11-20 08:54:14.926079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.926108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.940059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166e99d8 00:20:44.104 [2024-11-20 08:54:14.942168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.942198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.956117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ea248 00:20:44.104 [2024-11-20 08:54:14.958182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.958212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.972151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166eaab8 00:20:44.104 [2024-11-20 08:54:14.974193] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.974224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:14.988510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166eb328 00:20:44.104 [2024-11-20 08:54:14.990617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:14.990646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:44.104 [2024-11-20 08:54:15.004745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ebb98 00:20:44.104 [2024-11-20 08:54:15.006753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.104 [2024-11-20 08:54:15.006793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.020774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ec408 00:20:44.364 [2024-11-20 08:54:15.022763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.022792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.036879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ecc78 00:20:44.364 [2024-11-20 08:54:15.038841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.038871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.052968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ed4e8 00:20:44.364 [2024-11-20 08:54:15.054935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.054981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.069209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166edd58 00:20:44.364 [2024-11-20 08:54:15.071124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.071155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.085363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ee5c8 00:20:44.364 [2024-11-20 08:54:15.087262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.087291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.101382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166eee38 00:20:44.364 [2024-11-20 08:54:15.103271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.103300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.117428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166ef6a8 00:20:44.364 [2024-11-20 08:54:15.119291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.119320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.133457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166eff18 00:20:44.364 [2024-11-20 08:54:15.135316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.135345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.149552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f0788 00:20:44.364 [2024-11-20 08:54:15.151392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.151422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.165661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f0ff8 00:20:44.364 [2024-11-20 08:54:15.167478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.167510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.181932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f1868 00:20:44.364 [2024-11-20 08:54:15.183705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.183736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.197980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f20d8 00:20:44.364 [2024-11-20 08:54:15.199792] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.199834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.214286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f2948 00:20:44.364 [2024-11-20 08:54:15.216067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.216097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.230618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f31b8 00:20:44.364 [2024-11-20 08:54:15.232357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.232387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.246677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f3a28 00:20:44.364 [2024-11-20 08:54:15.248397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.248429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:44.364 [2024-11-20 08:54:15.263053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f4298 00:20:44.364 [2024-11-20 08:54:15.264775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.364 [2024-11-20 08:54:15.264822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:44.623 [2024-11-20 08:54:15.279132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f4b08 00:20:44.623 [2024-11-20 08:54:15.280820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.623 [2024-11-20 08:54:15.280851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:44.623 [2024-11-20 08:54:15.295351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f5378 00:20:44.623 [2024-11-20 08:54:15.297025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.623 [2024-11-20 08:54:15.297056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:44.623 [2024-11-20 08:54:15.311358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f5be8 00:20:44.623 [2024-11-20 
08:54:15.313003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.623 [2024-11-20 08:54:15.313035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:44.623 [2024-11-20 08:54:15.327410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f6458 00:20:44.623 [2024-11-20 08:54:15.329049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.623 [2024-11-20 08:54:15.329082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:44.623 [2024-11-20 08:54:15.343429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f6cc8 00:20:44.623 [2024-11-20 08:54:15.345051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.623 [2024-11-20 08:54:15.345082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:44.623 [2024-11-20 08:54:15.359647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f7538 00:20:44.623 [2024-11-20 08:54:15.361244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.623 [2024-11-20 08:54:15.361275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:44.623 [2024-11-20 08:54:15.375977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f7da8 00:20:44.623 [2024-11-20 08:54:15.377578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.623 [2024-11-20 08:54:15.377613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:44.623 [2024-11-20 08:54:15.392958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc5b0) with pdu=0x2000166f8618 00:20:44.623 15624.00 IOPS, 61.03 MiB/s [2024-11-20T08:54:15.538Z] [2024-11-20 08:54:15.394532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.623 [2024-11-20 08:54:15.394567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:44.623 00:20:44.623 Latency(us) 00:20:44.623 [2024-11-20T08:54:15.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.623 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:44.623 nvme0n1 : 2.00 15655.70 61.16 0.00 0.00 8167.48 5659.93 30504.03 00:20:44.623 [2024-11-20T08:54:15.538Z] =================================================================================================================== 00:20:44.623 [2024-11-20T08:54:15.538Z] Total : 15655.70 61.16 0.00 0.00 8167.48 5659.93 30504.03 00:20:44.623 { 00:20:44.623 "results": [ 
00:20:44.623 { 00:20:44.623 "job": "nvme0n1", 00:20:44.623 "core_mask": "0x2", 00:20:44.623 "workload": "randwrite", 00:20:44.623 "status": "finished", 00:20:44.623 "queue_depth": 128, 00:20:44.623 "io_size": 4096, 00:20:44.624 "runtime": 2.004126, 00:20:44.624 "iops": 15655.702286183603, 00:20:44.624 "mibps": 61.1550870554047, 00:20:44.624 "io_failed": 0, 00:20:44.624 "io_timeout": 0, 00:20:44.624 "avg_latency_us": 8167.480680543322, 00:20:44.624 "min_latency_us": 5659.927272727273, 00:20:44.624 "max_latency_us": 30504.02909090909 00:20:44.624 } 00:20:44.624 ], 00:20:44.624 "core_count": 1 00:20:44.624 } 00:20:44.624 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:44.624 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:44.624 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:44.624 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:44.624 | .driver_specific 00:20:44.624 | .nvme_error 00:20:44.624 | .status_code 00:20:44.624 | .command_transient_transport_error' 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80967 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80967 ']' 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80967 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80967 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:44.882 killing process with pid 80967 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80967' 00:20:44.882 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80967 00:20:44.882 Received shutdown signal, test time was about 2.000000 seconds 00:20:44.882 00:20:44.882 Latency(us) 00:20:44.882 [2024-11-20T08:54:15.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.883 [2024-11-20T08:54:15.798Z] =================================================================================================================== 00:20:44.883 [2024-11-20T08:54:15.798Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.883 08:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80967 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81033 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81033 /var/tmp/bperf.sock 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81033 ']' 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.141 08:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:45.400 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:45.400 Zero copy mechanism will not be used. 00:20:45.400 [2024-11-20 08:54:16.087640] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
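For orientation, the relaunch traced above (run_bperf_err randwrite 131072 16 in host/digest.sh) reduces to roughly the shell below. The bdevperf path, socket, and options are copied from the trace; the backgrounding and PID capture are a simplified sketch of what the script's waitforlisten-based startup wraps, not the helper itself.

    # Sketch only: relaunch bdevperf for the 128 KiB randwrite error run.
    # -z keeps I/O idle until perform_tests is sent over the RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # The script then blocks until /var/tmp/bperf.sock accepts RPCs
    # (waitforlisten "$bperfpid" /var/tmp/bperf.sock in the trace).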
00:20:45.400 [2024-11-20 08:54:16.087762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81033 ] 00:20:45.400 [2024-11-20 08:54:16.235939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.400 [2024-11-20 08:54:16.313778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.658 [2024-11-20 08:54:16.386038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:46.224 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.224 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:46.224 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:46.224 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:46.483 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:46.483 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.483 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.740 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.740 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:46.740 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:46.999 nvme0n1 00:20:46.999 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:46.999 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.999 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.999 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.999 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:46.999 08:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:46.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:46.999 Zero copy mechanism will not be used. 00:20:46.999 Running I/O for 2 seconds... 
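The setup traced just above is the heart of this digest-error pass: through the bperf socket it enables per-controller NVMe error counters (with what looks like unlimited bdev retries via --bdev-retry-count -1) and attaches the controller with the TCP data digest (--ddgst) enabled, then error injection corrupts the crc32c used for that digest, which is what produces the data_crc32_calc_done errors and COMMAND TRANSIENT TRANSPORT ERROR completions that fill this section. A condensed, hand-written sketch follows; the rpc.py path, target address, and subsystem NQN are taken from the trace, while the assumption that rpc_cmd uses rpc.py's default socket (the trace does not expand it) and the final jq read-back copied from the earlier get_transient_errcount call are assumptions of this sketch, not the host/digest.sh helpers themselves.

    # Sketch of the traced sequence, not the actual host/digest.sh helpers.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF="$RPC -s /var/tmp/bperf.sock"     # bperf_rpc in the trace
    TARGET="$RPC"                           # rpc_cmd; default socket assumed

    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $TARGET accel_error_inject_error -o crc32c -t disable
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $TARGET accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    # Afterwards the transient-error count is read back as in get_transient_errcount:
    $BPERF bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific
        | .nvme_error | .status_code | .command_transient_transport_error'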
00:20:46.999 [2024-11-20 08:54:17.855044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.855135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.855168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.860475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.860581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.860607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.865675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.865774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.865812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.870838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.870928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.870952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.876088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.876186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.876217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.881343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.881443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.881465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.886638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.886728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.886751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.892175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.892271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.892294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.897354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.897489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.897512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.902578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.902685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.902708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:46.999 [2024-11-20 08:54:17.907759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:46.999 [2024-11-20 08:54:17.907842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-11-20 08:54:17.907865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.258 [2024-11-20 08:54:17.913114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.258 [2024-11-20 08:54:17.913210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.258 [2024-11-20 08:54:17.913233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.258 [2024-11-20 08:54:17.918871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.258 [2024-11-20 08:54:17.918978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.258 [2024-11-20 08:54:17.919000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.258 [2024-11-20 08:54:17.924411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.258 [2024-11-20 08:54:17.924521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.258 [2024-11-20 08:54:17.924544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.258 [2024-11-20 08:54:17.929633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.258 [2024-11-20 08:54:17.929769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.258 [2024-11-20 08:54:17.929792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.258 [2024-11-20 08:54:17.934863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.258 [2024-11-20 08:54:17.934927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.934949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.940064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.940160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.940184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.945310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.945400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.945423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.950524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.950622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.950644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.955950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.956039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.956063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.961213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.961282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.961304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.966617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.966720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.966745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.972030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.972114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.972138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.977502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.977631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.977656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.983008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.983100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.983124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.988266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.988377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.988400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.993613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.993689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.993713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:17.998851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:17.998945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:17.998969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.004149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.004237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.004261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.009604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.009689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.009714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.014940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.015013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.015036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.020337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.020427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.020451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.025729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.025801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.025842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.031000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.031114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.031138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.036321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.036419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 
08:54:18.036442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.041757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.041850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.041876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.047060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.047159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.047184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.052460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.052577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.052602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.058050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.058158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.058181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.063390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.063494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.063517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.068689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.068781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.259 [2024-11-20 08:54:18.068804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.073932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.259 [2024-11-20 08:54:18.074036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:47.259 [2024-11-20 08:54:18.074059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.259 [2024-11-20 08:54:18.079103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.079201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.079223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.084390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.084525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.084549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.090047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.090115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.090137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.095382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.095509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.095548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.100812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.100907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.100964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.106290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.106374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.106397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.111647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.111718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.111744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.117132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.117232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.117256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.122340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.122466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.122499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.127894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.127970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.127991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.133401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.133507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.133562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.139117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.139182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.139203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.144553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.144674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.144696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.150096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.150160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.150181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.155427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.155568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.155592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.161036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.161150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.161173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.260 [2024-11-20 08:54:18.166449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.260 [2024-11-20 08:54:18.166561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.260 [2024-11-20 08:54:18.166583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.519 [2024-11-20 08:54:18.171612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.171693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.171714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.177178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.177252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.177273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.182708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.182803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.182826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.188476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.188628] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.188651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.194013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.194110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.194133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.199579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.199649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.199672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.205282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.205377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.205414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.210844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.210959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.210981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.216457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.216527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.216549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.221983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.222089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.222112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.227433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.227522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.227545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.232830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.232956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.232978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.238289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.238385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.238439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.243780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.243858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.243911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.249352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.249477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.249499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.254962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.255058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.255081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.260282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.260377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.260400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.265825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 
08:54:18.265931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.265984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.271471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.271571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.271594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.277000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.277106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.277129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.282336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.282429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.282452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.287512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.287617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.287640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.292816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.292919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.292942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.297943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.298011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.298033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.303172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with 
pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.303269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.303291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.308686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.520 [2024-11-20 08:54:18.308783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.520 [2024-11-20 08:54:18.308818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.520 [2024-11-20 08:54:18.313998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.314144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.314166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.319509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.319613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.319634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.324917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.324986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.325010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.330613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.330722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.330744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.335731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.335840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.335863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.340951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.341083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.341106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.346360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.346484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.346506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.351545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.351615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.351638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.356684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.356779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.356816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.361826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.361914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.361936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.366963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.367060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.367081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.372236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.372317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.372339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.377701] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.377770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.377791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.382979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.383075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.383097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.388054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.388152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.388174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.393198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.393286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.393308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.398330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.398429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.398451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.403464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.403553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.403575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.408612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.408682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.408704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.413849] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.413928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.413951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.419174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.419271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.419293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.424662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.424759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.424783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.521 [2024-11-20 08:54:18.430073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.521 [2024-11-20 08:54:18.430143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.521 [2024-11-20 08:54:18.430166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.781 [2024-11-20 08:54:18.435565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.781 [2024-11-20 08:54:18.435656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.781 [2024-11-20 08:54:18.435678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.781 [2024-11-20 08:54:18.440842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.781 [2024-11-20 08:54:18.440909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.781 [2024-11-20 08:54:18.440931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.446056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.446151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.446174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.782 
[2024-11-20 08:54:18.451173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.451268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.451301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.456455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.456532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.456554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.461749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.461844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.461867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.467134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.467230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.467252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.472600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.472668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.472691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.477886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.477984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.478006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.483058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.483153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.483176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 
p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.488554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.488639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.488661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.494095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.494167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.494189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.499527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.499608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.499630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.505161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.505230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.505252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.510757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.510850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.510872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.516470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.516598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.516621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.521925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.522013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.522033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.527497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.527578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.527599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.533052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.533163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.533187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.538508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.538617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.538638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.543761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.543865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.543888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.549044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.549149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.549172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.554092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.554188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.554211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.559383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.559481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.559508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.564668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.564736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.564758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.570088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.570185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.570208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.575436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.575533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.575556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.580740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.580857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.782 [2024-11-20 08:54:18.580879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.782 [2024-11-20 08:54:18.586000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.782 [2024-11-20 08:54:18.586111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.586133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.591240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.591335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.591358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.596639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.596707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.596730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.602201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.602280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.602302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.607502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.607603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.607625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.612876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.612982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.613004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.618242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.618325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.618348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.623637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.623729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.623751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.628909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.629019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.629056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.634364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.634488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.634511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.639833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.639954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.639976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.645030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.645106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.645129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.650210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.650307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.650329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.655348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.655437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.655459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.660554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.660632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.660654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.665777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.665888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.665910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.670969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.671063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 
08:54:18.671085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.676096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.676185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.676207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.681349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.681444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.681467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.686534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.686623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.686646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:47.783 [2024-11-20 08:54:18.691656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:47.783 [2024-11-20 08:54:18.691743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.783 [2024-11-20 08:54:18.691766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.696758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.696864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.696887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.701996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.702079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.702102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.707076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.707149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:48.044 [2024-11-20 08:54:18.707171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.712286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.712383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.712406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.717640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.717744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.717766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.722970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.723077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.723099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.728280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.728374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.728397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.733628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.733710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.733733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.739126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.739223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.739246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.744548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.744671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.744693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.750083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.750179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.750201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.755399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.755480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.755502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.760548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.760643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.760666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.765718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.765784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.765820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.771032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.771099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.771121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.776225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.776301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.776323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.781365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.781433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.781461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.786522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.786632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.786655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.791687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.791793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.791815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.796775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.796883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.044 [2024-11-20 08:54:18.796906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.044 [2024-11-20 08:54:18.801901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.044 [2024-11-20 08:54:18.801981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.802004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.807133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.807236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.807259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.812330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.812418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.812440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.817486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.817567] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.817590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.822776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.822871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.822893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.827946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.828041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.828063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.833151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.833246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.833268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.838230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.838325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.838348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.843336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.843432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.843456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.045 5749.00 IOPS, 718.62 MiB/s [2024-11-20T08:54:18.960Z] [2024-11-20 08:54:18.849900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.849998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.850022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.854995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 
00:20:48.045 [2024-11-20 08:54:18.855092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.855114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.860114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.860185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.860208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.865252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.865322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.865346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.870375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.870475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.870504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.875523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.875592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.875615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.880684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.880776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.880811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.885820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.885909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.885932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.890941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.891038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.891061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.896016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.896114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.901142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.901220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.901242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.906270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.906337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.906360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.911340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.911409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.911431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.916404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.916477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.916500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.921545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.921620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.921643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.926650] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.926744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.045 [2024-11-20 08:54:18.926773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.045 [2024-11-20 08:54:18.931747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.045 [2024-11-20 08:54:18.931850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.046 [2024-11-20 08:54:18.931873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.046 [2024-11-20 08:54:18.936908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.046 [2024-11-20 08:54:18.936999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.046 [2024-11-20 08:54:18.937021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.046 [2024-11-20 08:54:18.942060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.046 [2024-11-20 08:54:18.942149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.046 [2024-11-20 08:54:18.942172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.046 [2024-11-20 08:54:18.947224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.046 [2024-11-20 08:54:18.947297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.046 [2024-11-20 08:54:18.947331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.046 [2024-11-20 08:54:18.952362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.046 [2024-11-20 08:54:18.952451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.046 [2024-11-20 08:54:18.952474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:18.957498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.957585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.957608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:18.962634] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.962733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.962755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:18.967790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.967902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.967927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:18.973108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.973207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.973231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:18.978285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.978369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.978393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:18.983453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.983524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.983548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:18.988608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.988678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.988701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:18.993725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.993840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.993875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.306 
[2024-11-20 08:54:18.998857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:18.998942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:18.998976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:19.003956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:19.004053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:19.004075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:19.009096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:19.009168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:19.009190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:19.014186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:19.014286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:19.014315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:19.019311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:19.019379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:19.019402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:19.024479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:19.024591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:19.024613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.306 [2024-11-20 08:54:19.029576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.306 [2024-11-20 08:54:19.029674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.306 [2024-11-20 08:54:19.029697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0026 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.034667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.034765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.034787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.039815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.039890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.039913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.044946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.045043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.045077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.050069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.050163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.050186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.055184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.055281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.055303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.060239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.060330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.060351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.065409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.065488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.065510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.070498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.070593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.070615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.075625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.075697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.075719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.080745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.080843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.080865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.085958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.086087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.086108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.091327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.091432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.091455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.096722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.096832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.096855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.102253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.102348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.102370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.107640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.107762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.107784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.113045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.113115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.113137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.118275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.118386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.118408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.123606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.123709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.123732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.128997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.129088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.129110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.134348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.134472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.134494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.139717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.139808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.139831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.145346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.145541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.145562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.151105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.151207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.151231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.156450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.156531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.156581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.161900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.162015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.162038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.167462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.167597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.307 [2024-11-20 08:54:19.167619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.307 [2024-11-20 08:54:19.172984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.307 [2024-11-20 08:54:19.173107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.308 [2024-11-20 08:54:19.173128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.308 [2024-11-20 08:54:19.178280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.308 [2024-11-20 08:54:19.178375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.308 [2024-11-20 
08:54:19.178397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.308 [2024-11-20 08:54:19.183659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.308 [2024-11-20 08:54:19.183740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.308 [2024-11-20 08:54:19.183762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.308 [2024-11-20 08:54:19.189099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.308 [2024-11-20 08:54:19.189181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.308 [2024-11-20 08:54:19.189203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.308 [2024-11-20 08:54:19.194590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.308 [2024-11-20 08:54:19.194714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.308 [2024-11-20 08:54:19.194735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.308 [2024-11-20 08:54:19.200063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.308 [2024-11-20 08:54:19.200182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.308 [2024-11-20 08:54:19.200204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.308 [2024-11-20 08:54:19.205607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.308 [2024-11-20 08:54:19.205715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.308 [2024-11-20 08:54:19.205737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.308 [2024-11-20 08:54:19.211255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.308 [2024-11-20 08:54:19.211382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.308 [2024-11-20 08:54:19.211420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.308 [2024-11-20 08:54:19.216861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.308 [2024-11-20 08:54:19.216985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.308 [2024-11-20 08:54:19.217006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.222209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.222333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.222354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.227654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.227762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.227783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.233255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.233340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.233362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.238552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.238673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.238710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.243938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.244033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.244055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.249260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.249355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.249377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.254698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.254803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.254825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.260152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.260273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.260295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.265709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.265815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.265838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.271120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.271238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.271261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.276409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.276497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.276519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.281888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.282011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.282032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.287399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.287529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.287557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.293043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.567 [2024-11-20 08:54:19.293112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.567 [2024-11-20 08:54:19.293135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.567 [2024-11-20 08:54:19.298420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.298515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.298538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.304011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.304121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.304144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.309241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.309363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.309385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.314399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.314507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.314528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.319633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.319718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.319740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.325035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.325099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.325136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.330245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.330361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.330383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.335615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.335698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.335720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.340838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.340901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.340923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.346144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.346213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.346235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.351362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.351430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.351452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.356584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.356689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.356712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.361804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.361905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.361928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.367053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.367143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.367165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.372311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.372417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.372439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.377684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.377793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.377815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.383034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.383141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.383164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.388357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.388425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.388448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.393776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.393879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.393915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.399135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.399224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.399246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.404466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 
08:54:19.404534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.404557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.409898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.410007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.410029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.415226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.415319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.415341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.420517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.420598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.420620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.425879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.425945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.425967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.431035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.431124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.431148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.436318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.436437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.436459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.441433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 
00:20:48.568 [2024-11-20 08:54:19.441501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.441524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.446633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.446701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.446723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.451838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.451918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.451940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.456993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.457063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.457084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.462010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.462078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.462100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.467187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.467281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.467303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.472289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.472358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.568 [2024-11-20 08:54:19.472380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.568 [2024-11-20 08:54:19.477380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.568 [2024-11-20 08:54:19.477449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.569 [2024-11-20 08:54:19.477472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.828 [2024-11-20 08:54:19.482457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.828 [2024-11-20 08:54:19.482546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.828 [2024-11-20 08:54:19.482569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.828 [2024-11-20 08:54:19.487679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.828 [2024-11-20 08:54:19.487746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.828 [2024-11-20 08:54:19.487768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.828 [2024-11-20 08:54:19.492833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.828 [2024-11-20 08:54:19.492921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.828 [2024-11-20 08:54:19.492943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.828 [2024-11-20 08:54:19.497943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.828 [2024-11-20 08:54:19.498023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.828 [2024-11-20 08:54:19.498046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.828 [2024-11-20 08:54:19.503043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.828 [2024-11-20 08:54:19.503116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.828 [2024-11-20 08:54:19.503138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.508159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.508233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.508255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.513243] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.513331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.513354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.518306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.518402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.518424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.523400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.523469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.523492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.528482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.528549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.528588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.533539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.533664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.533686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.538571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.538665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.538687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.543687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.543780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.543817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.548868] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.548934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.548956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.553933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.554012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.554034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.558963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.559057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.559085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.564015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.564110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.564132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.569123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.569218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.569241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.574203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.574299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.574322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.579284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.579373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.579396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.829 
[2024-11-20 08:54:19.584442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.584530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.584553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.589692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.589789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.589812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.594851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.594950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.594972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.600116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.600184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.600207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.605207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.605275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.605297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.610425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.610529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.610551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.615579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.615714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.615737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 
p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.620887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.620977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.620999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.626048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.626126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.626148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.631265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.631391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.631412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.829 [2024-11-20 08:54:19.636444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.829 [2024-11-20 08:54:19.636531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.829 [2024-11-20 08:54:19.636553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.641687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.641809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.641831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.646793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.646889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.646911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.651899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.651966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.651988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.657020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.657087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.657109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.662136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.662206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.662228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.667220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.667308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.667330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.672369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.672477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.672500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.677434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.677517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.677539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.682558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.682645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.682667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.687618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.687706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.687728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.692718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.692818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.692841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.697821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.697885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.697908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.702880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.702967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.702990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.707953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.708019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.708040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.713031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.713099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.713121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.718112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.718205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.718227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.723156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.723243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.723265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.728230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.728324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.728348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.733294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.733395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.733417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.830 [2024-11-20 08:54:19.738384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:48.830 [2024-11-20 08:54:19.738478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.830 [2024-11-20 08:54:19.738500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.090 [2024-11-20 08:54:19.743437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.090 [2024-11-20 08:54:19.743503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.090 [2024-11-20 08:54:19.743526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:49.090 [2024-11-20 08:54:19.748468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.090 [2024-11-20 08:54:19.748542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.090 [2024-11-20 08:54:19.748578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.090 [2024-11-20 08:54:19.753536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.090 [2024-11-20 08:54:19.753630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.090 [2024-11-20 08:54:19.753653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:49.090 [2024-11-20 08:54:19.758629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.090 [2024-11-20 08:54:19.758717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.090 [2024-11-20 
08:54:19.758739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.090 [2024-11-20 08:54:19.763686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.763773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.763795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.768776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.768886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.768907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.773784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.773863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.773884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.778930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.779019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.779042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.784053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.784147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.784170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.789115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.789202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.789225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.794146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.794234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:49.091 [2024-11-20 08:54:19.794256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.799218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.799315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.799338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.804324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.804391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.804413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.809441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.809550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.809573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.814647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.814735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.814757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.819863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.819931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.819953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.825040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.825136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.825158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.830246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.830360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.830383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.835319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.835410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.835432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:49.091 [2024-11-20 08:54:19.840680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.840785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.840808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.091 5843.00 IOPS, 730.38 MiB/s [2024-11-20T08:54:20.006Z] [2024-11-20 08:54:19.847366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20cc750) with pdu=0x2000166ff3c8 00:20:49.091 [2024-11-20 08:54:19.847445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.091 [2024-11-20 08:54:19.847469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:49.091 00:20:49.091 Latency(us) 00:20:49.091 [2024-11-20T08:54:20.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.091 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:49.091 nvme0n1 : 2.00 5838.82 729.85 0.00 0.00 2734.25 2278.87 13285.93 00:20:49.091 [2024-11-20T08:54:20.006Z] =================================================================================================================== 00:20:49.091 [2024-11-20T08:54:20.006Z] Total : 5838.82 729.85 0.00 0.00 2734.25 2278.87 13285.93 00:20:49.091 { 00:20:49.091 "results": [ 00:20:49.091 { 00:20:49.091 "job": "nvme0n1", 00:20:49.091 "core_mask": "0x2", 00:20:49.091 "workload": "randwrite", 00:20:49.091 "status": "finished", 00:20:49.091 "queue_depth": 16, 00:20:49.091 "io_size": 131072, 00:20:49.091 "runtime": 2.004344, 00:20:49.091 "iops": 5838.818087114787, 00:20:49.091 "mibps": 729.8522608893484, 00:20:49.091 "io_failed": 0, 00:20:49.091 "io_timeout": 0, 00:20:49.091 "avg_latency_us": 2734.250622917201, 00:20:49.091 "min_latency_us": 2278.8654545454547, 00:20:49.091 "max_latency_us": 13285.934545454546 00:20:49.091 } 00:20:49.091 ], 00:20:49.091 "core_count": 1 00:20:49.091 } 00:20:49.091 08:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:49.091 08:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:49.091 08:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:49.091 08:54:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:49.091 | .driver_specific 00:20:49.091 | .nvme_error 00:20:49.091 | .status_code 00:20:49.091 | .command_transient_transport_error' 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 378 > 0 )) 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81033 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81033 ']' 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81033 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81033 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:49.350 killing process with pid 81033 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81033' 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81033 00:20:49.350 Received shutdown signal, test time was about 2.000000 seconds 00:20:49.350 00:20:49.350 Latency(us) 00:20:49.350 [2024-11-20T08:54:20.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.350 [2024-11-20T08:54:20.265Z] =================================================================================================================== 00:20:49.350 [2024-11-20T08:54:20.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:49.350 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81033 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80814 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80814 ']' 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80814 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80814 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.609 killing process with pid 80814 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80814' 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 80814 00:20:49.609 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80814 00:20:49.868 00:20:49.868 real 0m19.604s 00:20:49.868 user 0m38.606s 00:20:49.868 sys 0m4.936s 00:20:49.868 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.868 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:49.868 ************************************ 00:20:49.868 END TEST nvmf_digest_error 00:20:49.868 ************************************ 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.127 rmmod nvme_tcp 00:20:50.127 rmmod nvme_fabrics 00:20:50.127 rmmod nvme_keyring 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80814 ']' 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80814 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80814 ']' 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80814 00:20:50.127 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80814) - No such process 00:20:50.127 Process with pid 80814 is not found 00:20:50.127 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80814 is not found' 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:50.128 08:54:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:50.128 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:50.128 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:50.387 00:20:50.387 real 0m39.314s 00:20:50.387 user 1m15.895s 00:20:50.387 sys 0m10.258s 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:50.387 ************************************ 00:20:50.387 END TEST nvmf_digest 00:20:50.387 ************************************ 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.387 ************************************ 00:20:50.387 START TEST nvmf_host_multipath 00:20:50.387 ************************************ 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:50.387 * Looking for test storage... 
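The digest_error pass above decides pass/fail by reading the bperf bdev's iostat over the SPDK RPC socket and extracting the NVMe transient-transport-error counter with jq (the '(( 378 > 0 ))' check in host/digest.sh). A minimal standalone sketch of that query follows; the rpc.py path, socket path and bdev name are copied from this log and are assumptions outside this CI environment.

#!/usr/bin/env bash
# Sketch: report the transient transport error count for a bdev served by a
# running bperf/SPDK target, following the get_transient_errcount pattern above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as used in this log
sock=/var/tmp/bperf.sock                          # bperf RPC socket (assumed to be up)
bdev=nvme0n1                                      # bdev name used by the digest test

# bdev_get_iostat returns per-bdev statistics; the per-status-code NVMe error
# counters live under driver_specific.nvme_error.status_code.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

if (( errcount > 0 )); then
    echo "observed $errcount COMMAND TRANSIENT TRANSPORT ERROR completions"
else
    echo "no transient transport errors recorded" >&2
    exit 1
fi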
00:20:50.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:20:50.387 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:50.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.647 --rc genhtml_branch_coverage=1 00:20:50.647 --rc genhtml_function_coverage=1 00:20:50.647 --rc genhtml_legend=1 00:20:50.647 --rc geninfo_all_blocks=1 00:20:50.647 --rc geninfo_unexecuted_blocks=1 00:20:50.647 00:20:50.647 ' 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:50.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.647 --rc genhtml_branch_coverage=1 00:20:50.647 --rc genhtml_function_coverage=1 00:20:50.647 --rc genhtml_legend=1 00:20:50.647 --rc geninfo_all_blocks=1 00:20:50.647 --rc geninfo_unexecuted_blocks=1 00:20:50.647 00:20:50.647 ' 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:50.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.647 --rc genhtml_branch_coverage=1 00:20:50.647 --rc genhtml_function_coverage=1 00:20:50.647 --rc genhtml_legend=1 00:20:50.647 --rc geninfo_all_blocks=1 00:20:50.647 --rc geninfo_unexecuted_blocks=1 00:20:50.647 00:20:50.647 ' 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:50.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.647 --rc genhtml_branch_coverage=1 00:20:50.647 --rc genhtml_function_coverage=1 00:20:50.647 --rc genhtml_legend=1 00:20:50.647 --rc geninfo_all_blocks=1 00:20:50.647 --rc geninfo_unexecuted_blocks=1 00:20:50.647 00:20:50.647 ' 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.647 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.648 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:50.648 Cannot find device "nvmf_init_br" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:50.648 Cannot find device "nvmf_init_br2" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:50.648 Cannot find device "nvmf_tgt_br" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.648 Cannot find device "nvmf_tgt_br2" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:50.648 Cannot find device "nvmf_init_br" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:50.648 Cannot find device "nvmf_init_br2" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:50.648 Cannot find device "nvmf_tgt_br" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:50.648 Cannot find device "nvmf_tgt_br2" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:50.648 Cannot find device "nvmf_br" 00:20:50.648 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:50.649 Cannot find device "nvmf_init_if" 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:50.649 Cannot find device "nvmf_init_if2" 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:50.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:50.649 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:50.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:50.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:20:50.947 00:20:50.947 --- 10.0.0.3 ping statistics --- 00:20:50.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.947 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:50.947 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:50.947 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:20:50.947 00:20:50.947 --- 10.0.0.4 ping statistics --- 00:20:50.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.947 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:50.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:50.947 00:20:50.947 --- 10.0.0.1 ping statistics --- 00:20:50.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.947 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:50.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:50.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:20:50.947 00:20:50.947 --- 10.0.0.2 ping statistics --- 00:20:50.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.947 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81355 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81355 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81355 ']' 00:20:50.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.947 08:54:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:51.207 [2024-11-20 08:54:21.886875] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:51.207 [2024-11-20 08:54:21.887015] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.207 [2024-11-20 08:54:22.042068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:51.207 [2024-11-20 08:54:22.117466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.207 [2024-11-20 08:54:22.117729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.207 [2024-11-20 08:54:22.117964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.207 [2024-11-20 08:54:22.118148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.207 [2024-11-20 08:54:22.118195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.207 [2024-11-20 08:54:22.119826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.207 [2024-11-20 08:54:22.119831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.466 [2024-11-20 08:54:22.196132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:51.466 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.466 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:51.466 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.466 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.466 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:51.466 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.466 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81355 00:20:51.466 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:51.725 [2024-11-20 08:54:22.628872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.984 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:52.243 Malloc0 00:20:52.243 08:54:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:52.501 08:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:52.760 08:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:53.019 [2024-11-20 08:54:23.799970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:53.019 08:54:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:53.278 [2024-11-20 08:54:24.076126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:53.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81408 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81408 /var/tmp/bdevperf.sock 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81408 ']' 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.278 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:53.846 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.846 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:53.846 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:54.105 08:54:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:54.363 Nvme0n1 00:20:54.363 08:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:54.930 Nvme0n1 00:20:54.930 08:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:54.930 08:54:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.865 08:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:55.865 08:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:56.124 08:54:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:56.382 08:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:56.382 08:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81442 00:20:56.382 08:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:56.383 08:54:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81355 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:02.982 Attaching 4 probes... 00:21:02.982 @path[10.0.0.3, 4421]: 13263 00:21:02.982 @path[10.0.0.3, 4421]: 13582 00:21:02.982 @path[10.0.0.3, 4421]: 13530 00:21:02.982 @path[10.0.0.3, 4421]: 13389 00:21:02.982 @path[10.0.0.3, 4421]: 13511 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81442 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:02.982 08:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:03.241 08:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:03.241 08:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81560 00:21:03.241 08:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81355 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:03.241 08:54:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:09.820 Attaching 4 probes... 00:21:09.820 @path[10.0.0.3, 4420]: 13015 00:21:09.820 @path[10.0.0.3, 4420]: 13339 00:21:09.820 @path[10.0.0.3, 4420]: 13159 00:21:09.820 @path[10.0.0.3, 4420]: 12966 00:21:09.820 @path[10.0.0.3, 4420]: 15077 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81560 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:09.820 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:10.079 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:10.079 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81355 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:10.079 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81678 00:21:10.079 08:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:16.647 08:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:16.647 08:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:16.647 Attaching 4 probes... 00:21:16.647 @path[10.0.0.3, 4421]: 13634 00:21:16.647 @path[10.0.0.3, 4421]: 17488 00:21:16.647 @path[10.0.0.3, 4421]: 17549 00:21:16.647 @path[10.0.0.3, 4421]: 17664 00:21:16.647 @path[10.0.0.3, 4421]: 17716 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81678 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:16.647 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:16.906 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:17.165 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:17.165 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81791 00:21:17.165 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81355 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:17.165 08:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:23.729 08:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:23.729 08:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:23.729 Attaching 4 probes... 
00:21:23.729 00:21:23.729 00:21:23.729 00:21:23.729 00:21:23.729 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81791 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:23.729 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:23.986 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:23.986 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81909 00:21:23.986 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:23.986 08:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81355 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:30.615 08:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:30.615 08:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:30.615 Attaching 4 probes... 
00:21:30.615 @path[10.0.0.3, 4421]: 17105 00:21:30.615 @path[10.0.0.3, 4421]: 17414 00:21:30.615 @path[10.0.0.3, 4421]: 17464 00:21:30.615 @path[10.0.0.3, 4421]: 17437 00:21:30.615 @path[10.0.0.3, 4421]: 17285 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81909 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:30.615 08:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:31.552 08:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:31.552 08:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82027 00:21:31.552 08:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81355 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:31.552 08:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:38.121 Attaching 4 probes... 
00:21:38.121 @path[10.0.0.3, 4420]: 16804 00:21:38.121 @path[10.0.0.3, 4420]: 17261 00:21:38.121 @path[10.0.0.3, 4420]: 17346 00:21:38.121 @path[10.0.0.3, 4420]: 17351 00:21:38.121 @path[10.0.0.3, 4420]: 17217 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82027 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:38.121 08:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:38.121 [2024-11-20 08:55:09.015958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:38.380 08:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:38.638 08:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:45.251 08:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:45.251 08:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82207 00:21:45.251 08:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:45.251 08:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81355 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:50.521 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:50.521 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:50.780 Attaching 4 probes... 
00:21:50.780 @path[10.0.0.3, 4421]: 16586 00:21:50.780 @path[10.0.0.3, 4421]: 15338 00:21:50.780 @path[10.0.0.3, 4421]: 17094 00:21:50.780 @path[10.0.0.3, 4421]: 16524 00:21:50.780 @path[10.0.0.3, 4421]: 17098 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82207 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81408 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81408 ']' 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81408 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81408 00:21:50.780 killing process with pid 81408 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81408' 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81408 00:21:50.780 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81408 00:21:50.780 { 00:21:50.780 "results": [ 00:21:50.780 { 00:21:50.780 "job": "Nvme0n1", 00:21:50.781 "core_mask": "0x4", 00:21:50.781 "workload": "verify", 00:21:50.781 "status": "terminated", 00:21:50.781 "verify_range": { 00:21:50.781 "start": 0, 00:21:50.781 "length": 16384 00:21:50.781 }, 00:21:50.781 "queue_depth": 128, 00:21:50.781 "io_size": 4096, 00:21:50.781 "runtime": 56.022769, 00:21:50.781 "iops": 6913.1713214675265, 00:21:50.781 "mibps": 27.004575474482525, 00:21:50.781 "io_failed": 0, 00:21:50.781 "io_timeout": 0, 00:21:50.781 "avg_latency_us": 18483.0059499395, 00:21:50.781 "min_latency_us": 179.66545454545454, 00:21:50.781 "max_latency_us": 7046430.72 00:21:50.781 } 00:21:50.781 ], 00:21:50.781 "core_count": 1 00:21:50.781 } 00:21:51.356 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81408 00:21:51.356 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:51.356 [2024-11-20 08:54:24.184895] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 
24.03.0 initialization... 00:21:51.356 [2024-11-20 08:54:24.185830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81408 ] 00:21:51.356 [2024-11-20 08:54:24.340886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.356 [2024-11-20 08:54:24.413986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.356 [2024-11-20 08:54:24.483901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:51.356 Running I/O for 90 seconds... 00:21:51.356 6932.00 IOPS, 27.08 MiB/s [2024-11-20T08:55:22.271Z] 6865.50 IOPS, 26.82 MiB/s [2024-11-20T08:55:22.271Z] 6838.33 IOPS, 26.71 MiB/s [2024-11-20T08:55:22.271Z] 6825.00 IOPS, 26.66 MiB/s [2024-11-20T08:55:22.271Z] 6816.60 IOPS, 26.63 MiB/s [2024-11-20T08:55:22.271Z] 6790.00 IOPS, 26.52 MiB/s [2024-11-20T08:55:22.271Z] 6789.14 IOPS, 26.52 MiB/s [2024-11-20T08:55:22.271Z] 6772.50 IOPS, 26.46 MiB/s [2024-11-20T08:55:22.271Z] [2024-11-20 08:54:34.078495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.356 [2024-11-20 08:54:34.078579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.078663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.078694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.078721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.078737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.078761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.078776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.078813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.078842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.078878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.078902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.078925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.078940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.078968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.078983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:51.356 [2024-11-20 08:54:34.079380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.356 [2024-11-20 08:54:34.079893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:51.356 [2024-11-20 08:54:34.079915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.079930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.079951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.079967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:21:51.357 [2024-11-20 08:54:34.080554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.080978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.080994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.081016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.081031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.081053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.081078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.081101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.081117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.081138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.081153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:51.357 [2024-11-20 08:54:34.081175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.357 [2024-11-20 08:54:34.081190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:51.358 [2024-11-20 08:54:34.081736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.081961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.081979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.082002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.082025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.082046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.082061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.082095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.082111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.082133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.082148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.082169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.082184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.082205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.358 [2024-11-20 08:54:34.082220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:51.358 [2024-11-20 08:54:34.082242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.082283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.082331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.082368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.082405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.082442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.082478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.082515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.082552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.082580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.084722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.084755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.084784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.084814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.084840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.084856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.084878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.084893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.084914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.084929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.084951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.084966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.084988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:21:51.359 [2024-11-20 08:54:34.085099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.359 [2024-11-20 08:54:34.085316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.359 [2024-11-20 08:54:34.085359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.359 [2024-11-20 08:54:34.085397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:51.359 [2024-11-20 08:54:34.085418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.359 [2024-11-20 08:54:34.085433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.360 [2024-11-20 08:54:34.085922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:34.085959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:34.085998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:34.086024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:51.360 6747.78 IOPS, 26.36 MiB/s [2024-11-20T08:55:22.275Z] 6738.60 IOPS, 26.32 MiB/s [2024-11-20T08:55:22.275Z] 6732.45 IOPS, 26.30 MiB/s [2024-11-20T08:55:22.275Z] 6724.75 IOPS, 26.27 MiB/s [2024-11-20T08:55:22.275Z] 6699.85 IOPS, 26.17 MiB/s [2024-11-20T08:55:22.275Z] 6774.07 IOPS, 26.46 MiB/s [2024-11-20T08:55:22.275Z] 6912.33 IOPS, 27.00 MiB/s [2024-11-20T08:55:22.275Z] [2024-11-20 08:54:40.707734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.707818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.707885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.707907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.707931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.707947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.707968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.708018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.708042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.708058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.708079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:51.360 [2024-11-20 08:54:40.708094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.708115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.708130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.708151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.708166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.708187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.708202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.708223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.708237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:51.360 [2024-11-20 08:54:40.708258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.360 [2024-11-20 08:54:40.708273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.361 [2024-11-20 08:54:40.708308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.361 [2024-11-20 08:54:40.708344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.361 [2024-11-20 08:54:40.708380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.361 [2024-11-20 08:54:40.708415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.361 [2024-11-20 08:54:40.708460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.708971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.708986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.709008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.709023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.709044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.709059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.709081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.361 [2024-11-20 08:54:40.709097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.709282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.361 [2024-11-20 08:54:40.709306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.709330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.361 [2024-11-20 08:54:40.709346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:51.361 [2024-11-20 08:54:40.709369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.361 [2024-11-20 08:54:40.709385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:21:51.361 [2024-11-20 08:54:40.709406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.709422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.709458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.709495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.709531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.709569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.362 [2024-11-20 08:54:40.709616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.362 [2024-11-20 08:54:40.709653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.362 [2024-11-20 08:54:40.709690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.362 [2024-11-20 08:54:40.709727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.362 [2024-11-20 08:54:40.709764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.362 [2024-11-20 08:54:40.709814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.362 [2024-11-20 08:54:40.709856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.362 [2024-11-20 08:54:40.709893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.709930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.709968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.709990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.710005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.710027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.710042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.710072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.710088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.710110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.710125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.710146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.710161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.710183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.710198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.710229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.710244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:51.362 [2024-11-20 08:54:40.710266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.362 [2024-11-20 08:54:40.710281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:51.363 [2024-11-20 08:54:40.710543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.710812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.363 [2024-11-20 08:54:40.710861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.363 [2024-11-20 08:54:40.710899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.363 [2024-11-20 08:54:40.710935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.363 [2024-11-20 08:54:40.710972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.710994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.363 [2024-11-20 08:54:40.711016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.363 [2024-11-20 08:54:40.711054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.363 [2024-11-20 08:54:40.711091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.363 [2024-11-20 08:54:40.711128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.711171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.711210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.711247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.711283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.711320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.711356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.711399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:51.363 [2024-11-20 08:54:40.711421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.363 [2024-11-20 08:54:40.711436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.711472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.711517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.711555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.711592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.711628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.711665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:21:51.364 [2024-11-20 08:54:40.711686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.711701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.711738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.364 [2024-11-20 08:54:40.711779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.364 [2024-11-20 08:54:40.711872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.364 [2024-11-20 08:54:40.711939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.711966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.364 [2024-11-20 08:54:40.711993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.712034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.364 [2024-11-20 08:54:40.712062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.712117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.364 [2024-11-20 08:54:40.712146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.712171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.364 [2024-11-20 08:54:40.712187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.712967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.364 [2024-11-20 08:54:40.712997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.364 [2024-11-20 08:54:40.713848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:51.364 [2024-11-20 08:54:40.713889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:40.713905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:40.713934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:51.365 [2024-11-20 08:54:40.713949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:51.365 6500.81 IOPS, 25.39 MiB/s [2024-11-20T08:55:22.280Z] 6616.12 IOPS, 25.84 MiB/s [2024-11-20T08:55:22.280Z] 6735.67 IOPS, 26.31 MiB/s [2024-11-20T08:55:22.280Z] 6843.89 IOPS, 26.73 MiB/s [2024-11-20T08:55:22.280Z] 6943.30 IOPS, 27.12 MiB/s [2024-11-20T08:55:22.280Z] 7034.38 IOPS, 27.48 MiB/s [2024-11-20T08:55:22.280Z] 7119.73 IOPS, 27.81 MiB/s [2024-11-20T08:55:22.280Z] [2024-11-20 08:54:47.891356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.891423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.891546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.891585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.891621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.891658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.891694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.891730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.891766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.891802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.891859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.891895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.891934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.891956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.891971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.892019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.892055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.892092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:51.365 
[2024-11-20 08:54:47.892845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.365 [2024-11-20 08:54:47.892973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.892996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.893011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.893036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.893052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.893074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.893090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:51.365 [2024-11-20 08:54:47.893112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.365 [2024-11-20 08:54:47.893127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.893586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.893965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.893987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:51.366 [2024-11-20 08:54:47.894002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.894040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.894077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.894115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.894152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.894202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.366 [2024-11-20 08:54:47.894240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.366 [2024-11-20 08:54:47.894619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:51.366 [2024-11-20 08:54:47.894641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.894662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.894686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.894701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.894724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.894739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.894761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.894776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.894809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.894827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.894850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.894866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.894892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.894909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.894932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.894947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.894970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.894986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:21:51.367 [2024-11-20 08:54:47.895166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.895528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.895574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.895611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.895656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.895694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.895731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.895769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.367 [2024-11-20 08:54:47.895819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:51.367 [2024-11-20 08:54:47.895846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.367 [2024-11-20 08:54:47.895863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.895885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:54:47.895901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.895923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:54:47.895938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.895960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:54:47.895975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.895997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:54:47.896012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:54:47.896050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:54:47.896087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:54:47.896133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:51.368 [2024-11-20 08:54:47.896325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:54:47.896736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:54:47.896751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:51.368 6894.70 IOPS, 26.93 MiB/s [2024-11-20T08:55:22.283Z] 6607.42 IOPS, 25.81 MiB/s [2024-11-20T08:55:22.283Z] 6343.12 IOPS, 24.78 MiB/s [2024-11-20T08:55:22.283Z] 6099.15 IOPS, 23.82 MiB/s [2024-11-20T08:55:22.283Z] 5873.26 IOPS, 22.94 MiB/s [2024-11-20T08:55:22.283Z] 5663.50 IOPS, 22.12 MiB/s [2024-11-20T08:55:22.283Z] 5468.21 IOPS, 21.36 MiB/s [2024-11-20T08:55:22.283Z] 5508.10 IOPS, 21.52 MiB/s [2024-11-20T08:55:22.283Z] 5611.71 IOPS, 21.92 MiB/s [2024-11-20T08:55:22.283Z] 5708.72 IOPS, 22.30 MiB/s [2024-11-20T08:55:22.283Z] 5800.52 IOPS, 22.66 MiB/s [2024-11-20T08:55:22.283Z] 5884.91 IOPS, 22.99 MiB/s [2024-11-20T08:55:22.283Z] 5963.06 IOPS, 23.29 MiB/s [2024-11-20T08:55:22.283Z] [2024-11-20 08:55:01.401045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14840 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.368 [2024-11-20 08:55:01.401478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:55:01.401508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:55:01.401537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.368 [2024-11-20 08:55:01.401566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.368 [2024-11-20 08:55:01.401581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 
[2024-11-20 08:55:01.401681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.401979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.401994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.369 [2024-11-20 08:55:01.402717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.402747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.369 [2024-11-20 08:55:01.402776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.369 [2024-11-20 08:55:01.402792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.402818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.402834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.402848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.402864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.402878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.402893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.402907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.402922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.402936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 
[2024-11-20 08:55:01.402958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.402973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.402990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.370 [2024-11-20 08:55:01.403959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.403974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.403988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.404010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.370 [2024-11-20 08:55:01.404024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.370 [2024-11-20 08:55:01.404039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.371 [2024-11-20 08:55:01.404053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.371 [2024-11-20 08:55:01.404068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.371 [2024-11-20 08:55:01.404082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.371 [2024-11-20 08:55:01.404097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.371 [2024-11-20 08:55:01.404117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.371 [2024-11-20 08:55:01.404134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.371 [2024-11-20 08:55:01.404148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.371 [2024-11-20 08:55:01.404164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.371 [2024-11-20 08:55:01.404178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.371 [2024-11-20 08:55:01.404193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.371 
[2024-11-20 08:55:01.404207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.371 [... repeated records elided through 08:55:01.405126: nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* entries for the remaining queued WRITE (sqid:1, lba 15704-15808) and READ (sqid:1, lba 15192-15304) commands, each followed by the same ABORTED - SQ DELETION (00/08) completion ...] 00:21:51.372 [2024-11-20 08:55:01.405190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:51.372 [2024-11-20 08:55:01.405206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:51.372 [2024-11-20 08:55:01.405218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15312 len:8 PRP1 0x0 PRP2 0x0 00:21:51.372 [2024-11-20 08:55:01.405231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.372 [2024-11-20 08:55:01.405413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.372 [2024-11-20 08:55:01.405443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.372 [2024-11-20 08:55:01.405459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.372 [2024-11-20 08:55:01.405473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.372 [2024-11-20 08:55:01.405487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.372 [2024-11-20 08:55:01.405500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.372 [2024-11-20 08:55:01.405514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.372 [2024-11-20 08:55:01.405528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.372 [2024-11-20 08:55:01.405561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e71d0 is same with the state(6) to be set 00:21:51.372 [2024-11-20 08:55:01.406744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:51.372 [2024-11-20 08:55:01.406814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e71d0 (9): Bad file descriptor 00:21:51.372 [2024-11-20 08:55:01.407210] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.372 [2024-11-20 08:55:01.407246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e71d0 with addr=10.0.0.3, port=4421 00:21:51.372 [2024-11-20 08:55:01.407264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e71d0 is same with the state(6) to be set 00:21:51.372 [2024-11-20 08:55:01.407315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e71d0 (9): Bad file descriptor 00:21:51.372 [2024-11-20 08:55:01.407381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:51.372 [2024-11-20 08:55:01.407402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:51.372 [2024-11-20 08:55:01.407431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:51.372 [2024-11-20 08:55:01.407456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
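
The error burst above is the host side losing its active path: queued I/O is aborted on SQ deletion, and the reconnect to 10.0.0.3 port 4421 fails with connect() errno 111 (ECONNREFUSED) before the later successful reset. A minimal, hypothetical way to provoke the same failover by hand against this target, using the rpc.py script and subsystem shown in this log (not necessarily how multipath.sh itself drives it), would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener the host is currently using; its reconnect attempts then
  # fail with connect() errno 111 (ECONNREFUSED), as in the records above.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 10
  # Restore the listener; bdev_nvme's periodic reset attempts eventually report
  # "Resetting controller successful" and I/O resumes.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
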
00:21:51.372 [2024-11-20 08:55:01.407473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:51.372 6030.89 IOPS, 23.56 MiB/s [2024-11-20T08:55:22.287Z] 6098.05 IOPS, 23.82 MiB/s [2024-11-20T08:55:22.287Z] 6160.11 IOPS, 24.06 MiB/s [2024-11-20T08:55:22.287Z] 6221.23 IOPS, 24.30 MiB/s [2024-11-20T08:55:22.287Z] 6283.90 IOPS, 24.55 MiB/s [2024-11-20T08:55:22.287Z] 6341.56 IOPS, 24.77 MiB/s [2024-11-20T08:55:22.287Z] 6396.86 IOPS, 24.99 MiB/s [2024-11-20T08:55:22.287Z] 6448.47 IOPS, 25.19 MiB/s [2024-11-20T08:55:22.287Z] 6496.82 IOPS, 25.38 MiB/s [2024-11-20T08:55:22.287Z] 6542.13 IOPS, 25.56 MiB/s [2024-11-20T08:55:22.287Z] [2024-11-20 08:55:11.483987] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:21:51.372 6587.65 IOPS, 25.73 MiB/s [2024-11-20T08:55:22.287Z] 6630.79 IOPS, 25.90 MiB/s [2024-11-20T08:55:22.287Z] 6671.31 IOPS, 26.06 MiB/s [2024-11-20T08:55:22.287Z] 6709.37 IOPS, 26.21 MiB/s [2024-11-20T08:55:22.287Z] 6744.94 IOPS, 26.35 MiB/s [2024-11-20T08:55:22.287Z] 6778.18 IOPS, 26.48 MiB/s [2024-11-20T08:55:22.287Z] 6793.21 IOPS, 26.54 MiB/s [2024-11-20T08:55:22.287Z] 6826.70 IOPS, 26.67 MiB/s [2024-11-20T08:55:22.287Z] 6853.46 IOPS, 26.77 MiB/s [2024-11-20T08:55:22.287Z] 6884.35 IOPS, 26.89 MiB/s [2024-11-20T08:55:22.287Z] 6914.41 IOPS, 27.01 MiB/s [2024-11-20T08:55:22.287Z] Received shutdown signal, test time was about 56.023617 seconds 00:21:51.372 00:21:51.372 Latency(us) 00:21:51.372 [2024-11-20T08:55:22.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.372 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:51.372 Verification LBA range: start 0x0 length 0x4000 00:21:51.372 Nvme0n1 : 56.02 6913.17 27.00 0.00 0.00 18483.01 179.67 7046430.72 00:21:51.372 [2024-11-20T08:55:22.287Z] =================================================================================================================== 00:21:51.372 [2024-11-20T08:55:22.287Z] Total : 6913.17 27.00 0.00 0.00 18483.01 179.67 7046430.72 00:21:51.372 08:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.372 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:51.372 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:51.372 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:51.372 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.372 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.631 rmmod nvme_tcp 00:21:51.631 rmmod nvme_fabrics 00:21:51.631 rmmod nvme_keyring 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.631 08:55:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81355 ']' 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81355 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81355 ']' 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81355 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81355 00:21:51.631 killing process with pid 81355 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81355' 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81355 00:21:51.631 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81355 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 
00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:51.890 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:21:52.200 00:21:52.200 real 1m1.733s 00:21:52.200 user 2m51.419s 00:21:52.200 sys 0m18.600s 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:52.200 ************************************ 00:21:52.200 END TEST nvmf_host_multipath 00:21:52.200 ************************************ 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.200 ************************************ 00:21:52.200 START TEST nvmf_timeout 00:21:52.200 ************************************ 00:21:52.200 08:55:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:52.200 * Looking for test storage... 
00:21:52.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:52.200 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:52.200 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:21:52.200 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:52.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.461 --rc genhtml_branch_coverage=1 00:21:52.461 --rc genhtml_function_coverage=1 00:21:52.461 --rc genhtml_legend=1 00:21:52.461 --rc geninfo_all_blocks=1 00:21:52.461 --rc geninfo_unexecuted_blocks=1 00:21:52.461 00:21:52.461 ' 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:52.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.461 --rc genhtml_branch_coverage=1 00:21:52.461 --rc genhtml_function_coverage=1 00:21:52.461 --rc genhtml_legend=1 00:21:52.461 --rc geninfo_all_blocks=1 00:21:52.461 --rc geninfo_unexecuted_blocks=1 00:21:52.461 00:21:52.461 ' 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:52.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.461 --rc genhtml_branch_coverage=1 00:21:52.461 --rc genhtml_function_coverage=1 00:21:52.461 --rc genhtml_legend=1 00:21:52.461 --rc geninfo_all_blocks=1 00:21:52.461 --rc geninfo_unexecuted_blocks=1 00:21:52.461 00:21:52.461 ' 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:52.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.461 --rc genhtml_branch_coverage=1 00:21:52.461 --rc genhtml_function_coverage=1 00:21:52.461 --rc genhtml_legend=1 00:21:52.461 --rc geninfo_all_blocks=1 00:21:52.461 --rc geninfo_unexecuted_blocks=1 00:21:52.461 00:21:52.461 ' 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.461 
08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.461 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.462 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:52.462 08:55:23 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:52.462 Cannot find device "nvmf_init_br" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:52.462 Cannot find device "nvmf_init_br2" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:21:52.462 Cannot find device "nvmf_tgt_br" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:52.462 Cannot find device "nvmf_tgt_br2" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:52.462 Cannot find device "nvmf_init_br" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:52.462 Cannot find device "nvmf_init_br2" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:52.462 Cannot find device "nvmf_tgt_br" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:52.462 Cannot find device "nvmf_tgt_br2" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:52.462 Cannot find device "nvmf_br" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:52.462 Cannot find device "nvmf_init_if" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:52.462 Cannot find device "nvmf_init_if2" 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:52.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:52.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:52.462 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:52.721 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:52.721 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:21:52.721 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:52.721 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:52.721 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
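
Condensed from the nvmf_veth_init records above, the virtual topology the test runs on amounts to the following sketch of the same ip/iptables commands (not a replacement for nvmf/common.sh):

  # Target-side interfaces live in a dedicated network namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up and bridge the host-side peers together.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # Allow NVMe/TCP traffic on port 4420 in and let the bridge forward.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Connectivity across this topology is then verified by the pings that follow.
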
00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:52.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:52.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:21:52.722 00:21:52.722 --- 10.0.0.3 ping statistics --- 00:21:52.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.722 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:52.722 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:52.722 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:21:52.722 00:21:52.722 --- 10.0.0.4 ping statistics --- 00:21:52.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.722 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:52.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:52.722 00:21:52.722 --- 10.0.0.1 ping statistics --- 00:21:52.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.722 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:52.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:21:52.722 00:21:52.722 --- 10.0.0.2 ping statistics --- 00:21:52.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.722 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82568 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82568 00:21:52.722 08:55:23 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82568 ']' 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.722 08:55:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:52.981 [2024-11-20 08:55:23.692346] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:52.981 [2024-11-20 08:55:23.692462] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.981 [2024-11-20 08:55:23.846633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:53.240 [2024-11-20 08:55:23.929925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.240 [2024-11-20 08:55:23.930011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.240 [2024-11-20 08:55:23.930039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.240 [2024-11-20 08:55:23.930050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.240 [2024-11-20 08:55:23.930059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:53.240 [2024-11-20 08:55:23.931581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.240 [2024-11-20 08:55:23.931596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.240 [2024-11-20 08:55:24.006766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:54.176 08:55:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.176 08:55:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:54.176 08:55:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:54.176 08:55:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:54.176 08:55:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:54.176 08:55:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.176 08:55:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.176 08:55:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:54.176 [2024-11-20 08:55:25.055108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.176 08:55:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:54.744 Malloc0 00:21:54.744 08:55:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.744 08:55:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:55.311 08:55:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:55.311 [2024-11-20 08:55:26.190932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82617 00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82617 /var/tmp/bdevperf.sock 00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82617 ']' 00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.311 08:55:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:55.570 [2024-11-20 08:55:26.257153] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:55.570 [2024-11-20 08:55:26.257254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82617 ] 00:21:55.570 [2024-11-20 08:55:26.407956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.829 [2024-11-20 08:55:26.492082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.829 [2024-11-20 08:55:26.564998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:56.396 08:55:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.396 08:55:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:56.396 08:55:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:57.009 08:55:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:57.278 NVMe0n1 00:21:57.278 08:55:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82642 00:21:57.278 08:55:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:57.278 08:55:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:57.278 Running I/O for 10 seconds... 
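
Condensed from the records above, the timeout test's target and initiator setup reduces to the following RPC sequence (a sketch assembled from this log, using the same paths and parameters; the target is the nvmf_tgt instance started earlier inside nvmf_tgt_ns_spdk with -m 0x3):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: TCP transport, a 64 MiB / 512 B malloc bdev, one subsystem with
  # that namespace, listening on 10.0.0.3:4420.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Initiator side: bdevperf in wait-for-RPC mode (-z), queue depth 128, 4 KiB
  # verify workload for 10 seconds (backgrounded here for the sketch).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  # Options used by timeout.sh: bdev_nvme_set_options -r -1, then attach with a
  # 5 s ctrlr-loss timeout and a 2 s reconnect delay, exposing NVMe0n1.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
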
00:21:58.215 08:55:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:58.476 6804.00 IOPS, 26.58 MiB/s [2024-11-20T08:55:29.391Z] [2024-11-20 08:55:29.213892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.476 [... repeated records elided through 08:55:29.214706: the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x1e1db30 recurs with successive timestamps ...] 00:21:58.477 [2024-11-20 08:55:29.214714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the 
state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1db30 is same with the state(6) to be set 00:21:58.477 [2024-11-20 08:55:29.214999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.477 [2024-11-20 08:55:29.215031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.477 [2024-11-20 08:55:29.215055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.477 [2024-11-20 08:55:29.215066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.477 [2024-11-20 08:55:29.215079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 
08:55:29.215447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.478 [2024-11-20 08:55:29.215939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.478 [2024-11-20 08:55:29.215949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.215960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.215970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.215981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.215990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62120 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:58.479 [2024-11-20 08:55:29.216326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.479 [2024-11-20 08:55:29.216792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.479 [2024-11-20 08:55:29.216811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.216823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.216833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.216851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.216861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.216872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.216882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.216893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.216903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.216920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.216929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.216940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.216950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.216961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.216975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.216986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.216995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 
[2024-11-20 08:55:29.217426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.480 [2024-11-20 08:55:29.217435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.480 [2024-11-20 08:55:29.217662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.480 [2024-11-20 08:55:29.217671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.481 [2024-11-20 08:55:29.217682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.481 [2024-11-20 08:55:29.217691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.481 [2024-11-20 08:55:29.217702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.481 [2024-11-20 08:55:29.217711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.481 [2024-11-20 08:55:29.217722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.481 [2024-11-20 08:55:29.217731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.481 [2024-11-20 08:55:29.217742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:58.481 [2024-11-20 08:55:29.217751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.481 [2024-11-20 08:55:29.217762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.481 [2024-11-20 08:55:29.217771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.481 [2024-11-20 08:55:29.217781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbd1d0 is same with the state(6) to be set 00:21:58.481 [2024-11-20 08:55:29.217811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:58.481 [2024-11-20 08:55:29.217821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:58.481 [2024-11-20 08:55:29.217829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:21:58.481 [2024-11-20 08:55:29.217838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.481 [2024-11-20 08:55:29.218162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:58.481 [2024-11-20 08:55:29.218248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4fe50 (9): Bad file descriptor 00:21:58.481 [2024-11-20 08:55:29.218364] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, 
errno = 111 00:21:58.481 [2024-11-20 08:55:29.218385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4fe50 with addr=10.0.0.3, port=4420 00:21:58.481 [2024-11-20 08:55:29.218396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4fe50 is same with the state(6) to be set 00:21:58.481 [2024-11-20 08:55:29.218414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4fe50 (9): Bad file descriptor 00:21:58.481 [2024-11-20 08:55:29.218431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:58.481 [2024-11-20 08:55:29.218440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:58.481 [2024-11-20 08:55:29.218452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:58.481 [2024-11-20 08:55:29.218469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:58.481 [2024-11-20 08:55:29.218481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:58.481 08:55:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:00.369 3858.00 IOPS, 15.07 MiB/s [2024-11-20T08:55:31.284Z] 2572.00 IOPS, 10.05 MiB/s [2024-11-20T08:55:31.284Z] [2024-11-20 08:55:31.218810] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.369 [2024-11-20 08:55:31.218883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4fe50 with addr=10.0.0.3, port=4420 00:22:00.369 [2024-11-20 08:55:31.218900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4fe50 is same with the state(6) to be set 00:22:00.369 [2024-11-20 08:55:31.218929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4fe50 (9): Bad file descriptor 00:22:00.369 [2024-11-20 08:55:31.218978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:00.369 [2024-11-20 08:55:31.218990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:00.369 [2024-11-20 08:55:31.219019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:00.369 [2024-11-20 08:55:31.219031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:00.369 [2024-11-20 08:55:31.219044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:00.369 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:00.369 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.369 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:00.935 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:00.935 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:00.935 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:00.935 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:01.193 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:01.193 08:55:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:02.569 1929.00 IOPS, 7.54 MiB/s [2024-11-20T08:55:33.484Z] 1543.20 IOPS, 6.03 MiB/s [2024-11-20T08:55:33.484Z] [2024-11-20 08:55:33.219360] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.569 [2024-11-20 08:55:33.219450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4fe50 with addr=10.0.0.3, port=4420 00:22:02.569 [2024-11-20 08:55:33.219468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4fe50 is same with the state(6) to be set 00:22:02.569 [2024-11-20 08:55:33.219501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4fe50 (9): Bad file descriptor 00:22:02.569 [2024-11-20 08:55:33.219522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:02.569 [2024-11-20 08:55:33.219533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:02.569 [2024-11-20 08:55:33.219545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:02.569 [2024-11-20 08:55:33.219558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:02.569 [2024-11-20 08:55:33.219571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:04.437 1286.00 IOPS, 5.02 MiB/s [2024-11-20T08:55:35.352Z] 1102.29 IOPS, 4.31 MiB/s [2024-11-20T08:55:35.352Z] [2024-11-20 08:55:35.219760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:04.437 [2024-11-20 08:55:35.219870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:04.437 [2024-11-20 08:55:35.219887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:04.437 [2024-11-20 08:55:35.219899] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:22:04.437 [2024-11-20 08:55:35.219914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:05.372 964.50 IOPS, 3.77 MiB/s 00:22:05.372 Latency(us) 00:22:05.372 [2024-11-20T08:55:36.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.372 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:05.372 Verification LBA range: start 0x0 length 0x4000 00:22:05.372 NVMe0n1 : 8.15 946.98 3.70 15.71 0.00 132859.75 3961.95 7046430.72 00:22:05.372 [2024-11-20T08:55:36.287Z] =================================================================================================================== 00:22:05.372 [2024-11-20T08:55:36.287Z] Total : 946.98 3.70 15.71 0.00 132859.75 3961.95 7046430.72 00:22:05.372 { 00:22:05.372 "results": [ 00:22:05.372 { 00:22:05.372 "job": "NVMe0n1", 00:22:05.372 "core_mask": "0x4", 00:22:05.372 "workload": "verify", 00:22:05.372 "status": "finished", 00:22:05.372 "verify_range": { 00:22:05.372 "start": 0, 00:22:05.372 "length": 16384 00:22:05.372 }, 00:22:05.372 "queue_depth": 128, 00:22:05.372 "io_size": 4096, 00:22:05.372 "runtime": 8.148044, 00:22:05.372 "iops": 946.9757404353732, 00:22:05.372 "mibps": 3.6991239860756764, 00:22:05.372 "io_failed": 128, 00:22:05.372 "io_timeout": 0, 00:22:05.372 "avg_latency_us": 132859.75180751935, 00:22:05.372 "min_latency_us": 3961.949090909091, 00:22:05.372 "max_latency_us": 7046430.72 00:22:05.372 } 00:22:05.372 ], 00:22:05.372 "core_count": 1 00:22:05.372 } 00:22:06.308 08:55:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:06.308 08:55:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:06.308 08:55:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.308 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:06.308 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:06.308 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:06.308 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82642 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82617 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82617 ']' 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82617 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82617 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82617' 00:22:06.567 killing process with pid 82617 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 82617 00:22:06.567 Received shutdown signal, test time was about 9.400148 seconds 00:22:06.567 00:22:06.567 Latency(us) 00:22:06.567 [2024-11-20T08:55:37.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.567 [2024-11-20T08:55:37.482Z] =================================================================================================================== 00:22:06.567 [2024-11-20T08:55:37.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.567 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82617 00:22:06.826 08:55:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:07.085 [2024-11-20 08:55:37.979345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82766 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82766 /var/tmp/bdevperf.sock 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82766 ']' 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.347 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:07.347 [2024-11-20 08:55:38.048095] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:22:07.347 [2024-11-20 08:55:38.048185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82766 ] 00:22:07.347 [2024-11-20 08:55:38.191839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.617 [2024-11-20 08:55:38.269073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.617 [2024-11-20 08:55:38.339452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:07.617 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.617 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:07.617 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:07.876 08:55:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:08.135 NVMe0n1 00:22:08.135 08:55:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82782 00:22:08.135 08:55:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:08.135 08:55:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:08.393 Running I/O for 10 seconds... 
00:22:09.329 08:55:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:09.591 8566.00 IOPS, 33.46 MiB/s [2024-11-20T08:55:40.506Z] [2024-11-20 08:55:40.254636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.591 [2024-11-20 08:55:40.254709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.591 [2024-11-20 08:55:40.254737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.591 [2024-11-20 08:55:40.254748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.254770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.254814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.254838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.254859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.254880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.254902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.254924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79064 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.254945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.254966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.254986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.254997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:09.592 [2024-11-20 08:55:40.255157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.592 [2024-11-20 08:55:40.255429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.255450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.255477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.255498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.255519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.255539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.255560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.592 [2024-11-20 08:55:40.255571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.592 [2024-11-20 08:55:40.255581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255601] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.255785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.255817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.255838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.255868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.255890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.255910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.255931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.255951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.255982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.255991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 
[2024-11-20 08:55:40.256042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.593 [2024-11-20 08:55:40.256360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.256380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.593 [2024-11-20 08:55:40.256391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.593 [2024-11-20 08:55:40.256400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79840 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 
[2024-11-20 08:55:40.256896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.594 [2024-11-20 08:55:40.256916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.256980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.256991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.257000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.257011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.257020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.257031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.257041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.257053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.594 [2024-11-20 08:55:40.257062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.257073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d71d0 is same with the state(6) to be set 00:22:09.594 [2024-11-20 08:55:40.257085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.594 [2024-11-20 08:55:40.257093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.594 [2024-11-20 08:55:40.257102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 
00:22:09.594 [2024-11-20 08:55:40.257111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.257122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.594 [2024-11-20 08:55:40.257130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.594 [2024-11-20 08:55:40.257138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:22:09.594 [2024-11-20 08:55:40.257147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.257156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.594 [2024-11-20 08:55:40.257163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.594 [2024-11-20 08:55:40.257185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:22:09.594 [2024-11-20 08:55:40.257194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.257204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.594 [2024-11-20 08:55:40.257212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.594 [2024-11-20 08:55:40.257220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:22:09.594 [2024-11-20 08:55:40.257236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.594 [2024-11-20 08:55:40.257246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.594 [2024-11-20 08:55:40.257253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.594 [2024-11-20 08:55:40.257262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.257693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.257701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.257709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.257719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.273835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.273874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.273887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.273906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:09.595 [2024-11-20 08:55:40.273914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:09.595 [2024-11-20 08:55:40.273924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80072 len:8 PRP1 0x0 PRP2 0x0 00:22:09.595 [2024-11-20 08:55:40.273934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:09.595 [2024-11-20 08:55:40.274150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.595 [2024-11-20 08:55:40.274175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.274189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.595 [2024-11-20 08:55:40.274199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.274209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.595 [2024-11-20 08:55:40.274218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.274228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.595 [2024-11-20 08:55:40.274237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.595 [2024-11-20 08:55:40.274247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469e50 is same with the state(6) to be set 00:22:09.595 [2024-11-20 08:55:40.274503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:09.595 [2024-11-20 08:55:40.274546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469e50 (9): Bad file descriptor 00:22:09.595 [2024-11-20 08:55:40.274691] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.595 [2024-11-20 08:55:40.274720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1469e50 with addr=10.0.0.3, port=4420 00:22:09.595 [2024-11-20 08:55:40.274732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469e50 is same with the state(6) to be set 00:22:09.595 [2024-11-20 08:55:40.274751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469e50 (9): Bad file descriptor 00:22:09.595 [2024-11-20 08:55:40.274768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:09.595 [2024-11-20 08:55:40.274777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:09.595 [2024-11-20 08:55:40.274788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:09.595 [2024-11-20 08:55:40.274816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:22:09.595 [2024-11-20 08:55:40.274830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:09.595 08:55:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:10.532 4941.00 IOPS, 19.30 MiB/s [2024-11-20T08:55:41.447Z] [2024-11-20 08:55:41.274990] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.532 [2024-11-20 08:55:41.275068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1469e50 with addr=10.0.0.3, port=4420
00:22:10.532 [2024-11-20 08:55:41.275087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469e50 is same with the state(6) to be set
00:22:10.532 [2024-11-20 08:55:41.275116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469e50 (9): Bad file descriptor
00:22:10.532 [2024-11-20 08:55:41.275138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:22:10.532 [2024-11-20 08:55:41.275150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:22:10.532 [2024-11-20 08:55:41.275161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:22:10.532 [2024-11-20 08:55:41.275174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:22:10.532 [2024-11-20 08:55:41.275187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:10.532 08:55:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:10.791 [2024-11-20 08:55:41.579818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:10.791 08:55:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82782
00:22:11.618 3294.00 IOPS, 12.87 MiB/s [2024-11-20T08:55:42.534Z] [2024-11-20 08:55:42.291010] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
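The block above is the recovery half of the timeout test: while the TCP listener is absent, every reconnect attempt fails with errno = 111 (connection refused) and the controller stays in a failed state; once host/timeout.sh@91 re-adds the listener via rpc.py, the target listens on 10.0.0.3:4420 again and the next reset completes successfully. As a minimal sketch (not part of the test scripts themselves), the same re-add step could be driven from Python; the rpc.py path, NQN, address, and port are simply the values printed in the log and would differ on another setup.

import subprocess

# Values copied from the log above; they are specific to this test bed.
RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
ADDR, PORT = "10.0.0.3", "4420"

def re_add_tcp_listener() -> None:
    """Re-add the NVMe/TCP listener that the test removed earlier.

    Equivalent to the shell step shown in the log:
      rpc.py nvmf_subsystem_add_listener <nqn> -t tcp -a <addr> -s <port>
    """
    subprocess.run(
        [RPC_PY, "nvmf_subsystem_add_listener", NQN,
         "-t", "tcp", "-a", ADDR, "-s", PORT],
        check=True,  # raise CalledProcessError if the RPC fails
    )

if __name__ == "__main__":
    re_add_tcp_listener()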
00:22:13.497 2470.50 IOPS, 9.65 MiB/s [2024-11-20T08:55:45.349Z] 3314.20 IOPS, 12.95 MiB/s [2024-11-20T08:55:46.285Z] 4128.50 IOPS, 16.13 MiB/s [2024-11-20T08:55:47.222Z] 4746.71 IOPS, 18.54 MiB/s [2024-11-20T08:55:48.159Z] 5326.88 IOPS, 20.81 MiB/s [2024-11-20T08:55:49.536Z] 5789.44 IOPS, 22.62 MiB/s [2024-11-20T08:55:49.536Z] 6146.70 IOPS, 24.01 MiB/s
00:22:18.621 Latency(us)
00:22:18.621 [2024-11-20T08:55:49.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:18.621 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:18.621 Verification LBA range: start 0x0 length 0x4000
00:22:18.621 NVMe0n1 : 10.01 6152.04 24.03 0.00 0.00 20760.84 1593.72 3035150.89
00:22:18.621 [2024-11-20T08:55:49.536Z] ===================================================================================================================
00:22:18.621 [2024-11-20T08:55:49.536Z] Total : 6152.04 24.03 0.00 0.00 20760.84 1593.72 3035150.89
00:22:18.621 {
00:22:18.621   "results": [
00:22:18.621     {
00:22:18.621       "job": "NVMe0n1",
00:22:18.621       "core_mask": "0x4",
00:22:18.621       "workload": "verify",
00:22:18.621       "status": "finished",
00:22:18.621       "verify_range": {
00:22:18.621         "start": 0,
00:22:18.621         "length": 16384
00:22:18.621       },
00:22:18.621       "queue_depth": 128,
00:22:18.621       "io_size": 4096,
00:22:18.621       "runtime": 10.009193,
00:22:18.621       "iops": 6152.044425559583,
00:22:18.621       "mibps": 24.03142353734212,
00:22:18.621       "io_failed": 0,
00:22:18.621       "io_timeout": 0,
00:22:18.621       "avg_latency_us": 20760.8382667082,
00:22:18.621       "min_latency_us": 1593.7163636363637,
00:22:18.621       "max_latency_us": 3035150.8945454545
00:22:18.621     }
00:22:18.621   ],
00:22:18.621   "core_count": 1
00:22:18.621 }
00:22:18.621 08:55:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82888
00:22:18.621 08:55:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:18.621 08:55:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:18.621 Running I/O for 10 seconds...
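bdevperf reports the finished verify job twice: as the human-readable latency table and as the JSON document above. A minimal sketch of how such a result could be condensed to a one-line summary (for example by a CI wrapper) follows; it assumes only the field names visible in that JSON (results, job, iops, mibps, io_failed, io_timeout, avg_latency_us) and is not part of the SPDK scripts.

import json

def summarize_bdevperf(raw: str) -> str:
    """Condense a bdevperf JSON result into one line per job."""
    data = json.loads(raw)
    lines = []
    for job in data["results"]:
        lines.append(
            "{}: {:.2f} IOPS, {:.2f} MiB/s, avg latency {:.2f} ms, "
            "{} failed, {} timed out".format(
                job["job"], job["iops"], job["mibps"],
                job["avg_latency_us"] / 1000.0,
                job["io_failed"], job["io_timeout"],
            )
        )
    return "\n".join(lines)

# With the numbers from the run above this prints:
#   NVMe0n1: 6152.04 IOPS, 24.03 MiB/s, avg latency 20.76 ms, 0 failed, 0 timed out
example = json.dumps({"results": [{"job": "NVMe0n1", "iops": 6152.044425559583,
                                   "mibps": 24.03142353734212, "io_failed": 0,
                                   "io_timeout": 0,
                                   "avg_latency_us": 20760.8382667082}],
                      "core_count": 1})
print(summarize_bdevperf(example))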
00:22:19.563 08:55:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:19.563 6933.00 IOPS, 27.08 MiB/s [2024-11-20T08:55:50.478Z] [2024-11-20 08:55:50.429929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with [2024-11-20 08:55:50.429955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(6) to be set 00:22:19.563 id:0 cdw10:00000000 cdw11:00000000 00:22:19.563 [2024-11-20 08:55:50.430003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.563 [2024-11-20 08:55:50.430015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.563 [2024-11-20 08:55:50.430025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-20 08:55:50.430035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.563 the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.563 [2024-11-20 08:55:50.430054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.563 [2024-11-20 08:55:50.430064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.563 [2024-11-20 08:55:50.430073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.563 [2024-11-20 08:55:50.430083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469e50 is same [2024-11-20 08:55:50.430092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same 
with with the state(6) to be set 00:22:19.563 the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430281] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.563 [2024-11-20 08:55:50.430309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the 
state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 
08:55:50.430863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.430954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1e5d0 is same with the state(6) to be set 00:22:19.564 [2024-11-20 08:55:50.431009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.564 [2024-11-20 08:55:50.431029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.564 [2024-11-20 08:55:50.431049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.564 [2024-11-20 08:55:50.431064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.564 [2024-11-20 08:55:50.431076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.564 [2024-11-20 08:55:50.431085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.564 [2024-11-20 08:55:50.431096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.564 [2024-11-20 08:55:50.431105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.564 [2024-11-20 08:55:50.431117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.564 [2024-11-20 
08:55:50.431126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.564 [2024-11-20 08:55:50.431137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.565 [2024-11-20 08:55:50.431958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.565 [2024-11-20 08:55:50.431968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 
[2024-11-20 08:55:50.431979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.431989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66424 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.566 [2024-11-20 08:55:50.432806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.566 [2024-11-20 08:55:50.432818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.432828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.432839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:19.567 [2024-11-20 08:55:50.432849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.432860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.432870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.432881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.432890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.432902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.432912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.432923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.432932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.432943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.432952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.432964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.432973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.432984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.432993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433054] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433259] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.567 [2024-11-20 08:55:50.433360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.567 [2024-11-20 08:55:50.433381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.567 [2024-11-20 08:55:50.433402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.567 [2024-11-20 08:55:50.433432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.567 [2024-11-20 08:55:50.433452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.567 [2024-11-20 08:55:50.433472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.567 [2024-11-20 08:55:50.433493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.567 [2024-11-20 08:55:50.433504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.567 [2024-11-20 08:55:50.433513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.568 [2024-11-20 08:55:50.433533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.568 [2024-11-20 08:55:50.433555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.568 [2024-11-20 08:55:50.433576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.568 [2024-11-20 08:55:50.433596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.568 [2024-11-20 08:55:50.433616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.568 [2024-11-20 08:55:50.433636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.568 [2024-11-20 08:55:50.433656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.568 [2024-11-20 08:55:50.433676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 
[2024-11-20 08:55:50.433687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.568 [2024-11-20 08:55:50.433696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.433707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8290 is same with the state(6) to be set 00:22:19.568 [2024-11-20 08:55:50.433719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.568 [2024-11-20 08:55:50.433727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.568 [2024-11-20 08:55:50.433735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66720 len:8 PRP1 0x0 PRP2 0x0 00:22:19.568 [2024-11-20 08:55:50.433751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.568 [2024-11-20 08:55:50.434034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:19.568 [2024-11-20 08:55:50.434070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469e50 (9): Bad file descriptor 00:22:19.568 [2024-11-20 08:55:50.434195] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.568 [2024-11-20 08:55:50.434227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1469e50 with addr=10.0.0.3, port=4420 00:22:19.568 [2024-11-20 08:55:50.434240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469e50 is same with the state(6) to be set 00:22:19.568 [2024-11-20 08:55:50.434259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469e50 (9): Bad file descriptor 00:22:19.568 [2024-11-20 08:55:50.434276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:19.568 [2024-11-20 08:55:50.434286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:19.568 [2024-11-20 08:55:50.434298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:19.568 [2024-11-20 08:55:50.434310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:19.568 [2024-11-20 08:55:50.434320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:19.568 08:55:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:20.529 4114.00 IOPS, 16.07 MiB/s [2024-11-20T08:55:51.444Z] [2024-11-20 08:55:51.434472] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.529 [2024-11-20 08:55:51.434580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1469e50 with addr=10.0.0.3, port=4420 00:22:20.529 [2024-11-20 08:55:51.434615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469e50 is same with the state(6) to be set 00:22:20.529 [2024-11-20 08:55:51.434646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469e50 (9): Bad file descriptor 00:22:20.529 [2024-11-20 08:55:51.434683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:20.529 [2024-11-20 08:55:51.434695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:20.529 [2024-11-20 08:55:51.434707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:20.529 [2024-11-20 08:55:51.434720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:20.529 [2024-11-20 08:55:51.434732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:21.725 2742.67 IOPS, 10.71 MiB/s [2024-11-20T08:55:52.640Z] [2024-11-20 08:55:52.434911] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.725 [2024-11-20 08:55:52.434996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1469e50 with addr=10.0.0.3, port=4420 00:22:21.726 [2024-11-20 08:55:52.435014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469e50 is same with the state(6) to be set 00:22:21.726 [2024-11-20 08:55:52.435044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469e50 (9): Bad file descriptor 00:22:21.726 [2024-11-20 08:55:52.435077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:21.726 [2024-11-20 08:55:52.435090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:21.726 [2024-11-20 08:55:52.435101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:21.726 [2024-11-20 08:55:52.435114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:21.726 [2024-11-20 08:55:52.435126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:22.663 2057.00 IOPS, 8.04 MiB/s [2024-11-20T08:55:53.578Z] [2024-11-20 08:55:53.438923] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.663 [2024-11-20 08:55:53.438987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1469e50 with addr=10.0.0.3, port=4420 00:22:22.663 [2024-11-20 08:55:53.439005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469e50 is same with the state(6) to be set 00:22:22.663 [2024-11-20 08:55:53.439266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1469e50 (9): Bad file descriptor 00:22:22.663 [2024-11-20 08:55:53.439511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:22.663 [2024-11-20 08:55:53.439533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:22.663 [2024-11-20 08:55:53.439545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:22.663 [2024-11-20 08:55:53.439557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:22.663 [2024-11-20 08:55:53.439569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:22.663 08:55:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:22.922 [2024-11-20 08:55:53.743685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:22.922 08:55:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82888 00:22:23.749 1645.60 IOPS, 6.43 MiB/s [2024-11-20T08:55:54.664Z] [2024-11-20 08:55:54.467367] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:22:25.640 2566.83 IOPS, 10.03 MiB/s [2024-11-20T08:55:57.524Z] 3518.14 IOPS, 13.74 MiB/s [2024-11-20T08:55:58.461Z] 4230.38 IOPS, 16.52 MiB/s [2024-11-20T08:55:59.400Z] 4798.56 IOPS, 18.74 MiB/s [2024-11-20T08:55:59.400Z] 5159.10 IOPS, 20.15 MiB/s 00:22:28.485 Latency(us) 00:22:28.485 [2024-11-20T08:55:59.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.485 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:28.485 Verification LBA range: start 0x0 length 0x4000 00:22:28.485 NVMe0n1 : 10.01 5165.02 20.18 3684.49 0.00 14426.03 703.77 3019898.88 00:22:28.485 [2024-11-20T08:55:59.400Z] =================================================================================================================== 00:22:28.485 [2024-11-20T08:55:59.400Z] Total : 5165.02 20.18 3684.49 0.00 14426.03 0.00 3019898.88 00:22:28.485 { 00:22:28.485 "results": [ 00:22:28.485 { 00:22:28.485 "job": "NVMe0n1", 00:22:28.485 "core_mask": "0x4", 00:22:28.485 "workload": "verify", 00:22:28.485 "status": "finished", 00:22:28.485 "verify_range": { 00:22:28.485 "start": 0, 00:22:28.485 "length": 16384 00:22:28.485 }, 00:22:28.485 "queue_depth": 128, 00:22:28.485 "io_size": 4096, 00:22:28.485 "runtime": 10.013326, 00:22:28.485 "iops": 5165.017098214918, 00:22:28.485 "mibps": 20.175848039902025, 00:22:28.485 "io_failed": 36894, 00:22:28.485 "io_timeout": 0, 00:22:28.485 "avg_latency_us": 14426.031735052215, 00:22:28.485 "min_latency_us": 703.7672727272727, 00:22:28.485 "max_latency_us": 3019898.88 00:22:28.485 } 00:22:28.485 ], 00:22:28.485 "core_count": 1 00:22:28.485 } 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82766 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82766 ']' 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82766 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82766 00:22:28.485 killing process with pid 82766 00:22:28.485 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.485 00:22:28.485 Latency(us) 00:22:28.485 [2024-11-20T08:55:59.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.485 [2024-11-20T08:55:59.400Z] =================================================================================================================== 00:22:28.485 [2024-11-20T08:55:59.400Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82766' 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82766 00:22:28.485 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82766 00:22:28.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
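The Fail/s and MiB/s columns in the latency summary above follow directly from the raw fields in the JSON results object printed by bdevperf; a minimal sketch of that arithmetic (plain Python, values copied from the results block above):

```python
import json

# Abridged copy of the bdevperf results object printed above (fields used here only).
results = json.loads("""
{
  "iops": 5165.017098214918,
  "io_size": 4096,
  "io_failed": 36894,
  "runtime": 10.013326
}
""")

# MiB/s is IOPS times the 4 KiB I/O size, converted to MiB.
mib_per_sec = results["iops"] * results["io_size"] / (1024 * 1024)

# Fail/s is total failed I/O divided by the measured runtime.
fails_per_sec = results["io_failed"] / results["runtime"]

print(f"{mib_per_sec:.2f} MiB/s, {fails_per_sec:.2f} Fail/s")
```

Both derived values match the table above (about 20.18 MiB/s and 3684.49 Fail/s): roughly 5165 IOPS of 4 KiB reads, with about 36.9k I/Os aborted during the controller resets over the ~10 s run.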
00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=83001 00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 83001 /var/tmp/bdevperf.sock 00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 83001 ']' 00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.745 08:55:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:29.041 [2024-11-20 08:55:59.660196] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:29.041 [2024-11-20 08:55:59.660517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83001 ] 00:22:29.041 [2024-11-20 08:55:59.809445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.041 [2024-11-20 08:55:59.891214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.299 [2024-11-20 08:55:59.969637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:29.867 08:56:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.867 08:56:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:29.867 08:56:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=83014 00:22:29.867 08:56:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:29.867 08:56:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:30.437 08:56:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:30.696 NVMe0n1 00:22:30.696 08:56:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=83060 00:22:30.696 08:56:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.696 08:56:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:30.696 Running I/O for 10 seconds... 
00:22:31.633 08:56:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:31.898 13980.00 IOPS, 54.61 MiB/s [2024-11-20T08:56:02.813Z] [2024-11-20 08:56:02.717133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.898 [2024-11-20 08:56:02.717403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 
00:22:31.898 [2024-11-20 08:56:02.717411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set [... identical nvmf_tcp_qpair_set_recv_state *ERROR* messages for tqpair=0x1e1ac10 repeated at sub-millisecond intervals through 08:56:02.718277 ...] 00:22:31.899 [2024-11-20 08:56:02.718284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ac10 is same with the state(6) to be set 00:22:31.899 [2024-11-20 08:56:02.718509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 
08:56:02.718835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.899 [2024-11-20 08:56:02.718897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.899 [2024-11-20 08:56:02.718907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.718920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.718931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.718943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.718952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.718963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.718972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.718984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.718993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31480 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:31.900 [2024-11-20 08:56:02.719683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.900 [2024-11-20 08:56:02.719714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.900 [2024-11-20 08:56:02.719723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719910] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.719982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.719992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.901 [2024-11-20 08:56:02.720525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.901 [2024-11-20 08:56:02.720536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:31.902 [2024-11-20 08:56:02.720791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.720981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.720992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 
08:56:02.721012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.902 [2024-11-20 08:56:02.721261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.902 [2024-11-20 08:56:02.721270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.903 [2024-11-20 08:56:02.721281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.903 [2024-11-20 08:56:02.721290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.903 [2024-11-20 08:56:02.721301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.903 [2024-11-20 08:56:02.721309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.903 [2024-11-20 08:56:02.721320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ea310 is same with the state(6) to be set 00:22:31.903 [2024-11-20 08:56:02.721332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.903 [2024-11-20 08:56:02.721340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.903 [2024-11-20 08:56:02.721349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67904 len:8 PRP1 0x0 PRP2 0x0 00:22:31.903 [2024-11-20 08:56:02.721359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.903 [2024-11-20 08:56:02.721695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:31.903 [2024-11-20 08:56:02.721878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ce50 (9): Bad file descriptor 00:22:31.903 [2024-11-20 08:56:02.722003] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.903 [2024-11-20 08:56:02.722024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227ce50 with addr=10.0.0.3, port=4420 00:22:31.903 [2024-11-20 08:56:02.722035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227ce50 is same with the state(6) to be set 00:22:31.903 [2024-11-20 08:56:02.722053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ce50 (9): Bad file descriptor 00:22:31.903 [2024-11-20 08:56:02.722070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:31.903 [2024-11-20 08:56:02.722080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:31.903 [2024-11-20 08:56:02.722091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:22:31.903 [2024-11-20 08:56:02.722103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:31.903 [2024-11-20 08:56:02.722114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:31.903 08:56:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 83060 00:22:33.778 7880.00 IOPS, 30.78 MiB/s [2024-11-20T08:56:04.952Z] 5253.33 IOPS, 20.52 MiB/s [2024-11-20T08:56:04.952Z] [2024-11-20 08:56:04.722359] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.037 [2024-11-20 08:56:04.722569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227ce50 with addr=10.0.0.3, port=4420 00:22:34.037 [2024-11-20 08:56:04.722774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227ce50 is same with the state(6) to be set 00:22:34.037 [2024-11-20 08:56:04.723007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ce50 (9): Bad file descriptor 00:22:34.037 [2024-11-20 08:56:04.723228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:34.037 [2024-11-20 08:56:04.723407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:34.037 [2024-11-20 08:56:04.723545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:34.037 [2024-11-20 08:56:04.723652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:34.037 [2024-11-20 08:56:04.723724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:35.911 3940.00 IOPS, 15.39 MiB/s [2024-11-20T08:56:06.826Z] 3152.00 IOPS, 12.31 MiB/s [2024-11-20T08:56:06.826Z] [2024-11-20 08:56:06.724106] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.911 [2024-11-20 08:56:06.724180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227ce50 with addr=10.0.0.3, port=4420 00:22:35.911 [2024-11-20 08:56:06.724196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227ce50 is same with the state(6) to be set 00:22:35.911 [2024-11-20 08:56:06.724230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ce50 (9): Bad file descriptor 00:22:35.911 [2024-11-20 08:56:06.724250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:35.911 [2024-11-20 08:56:06.724260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:35.911 [2024-11-20 08:56:06.724271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:35.911 [2024-11-20 08:56:06.724282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:22:35.911 [2024-11-20 08:56:06.724293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:37.789 2626.67 IOPS, 10.26 MiB/s [2024-11-20T08:56:08.964Z] 2251.43 IOPS, 8.79 MiB/s [2024-11-20T08:56:08.964Z] [2024-11-20 08:56:08.724372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:38.049 [2024-11-20 08:56:08.724440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:38.049 [2024-11-20 08:56:08.724452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:38.049 [2024-11-20 08:56:08.724463] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:22:38.049 [2024-11-20 08:56:08.724477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:38.985 1970.00 IOPS, 7.70 MiB/s 00:22:38.985 Latency(us) 00:22:38.985 [2024-11-20T08:56:09.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.985 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:38.985 NVMe0n1 : 8.13 1937.70 7.57 15.74 0.00 65407.09 1556.48 7015926.69 00:22:38.985 [2024-11-20T08:56:09.900Z] =================================================================================================================== 00:22:38.985 [2024-11-20T08:56:09.900Z] Total : 1937.70 7.57 15.74 0.00 65407.09 1556.48 7015926.69 00:22:38.985 { 00:22:38.985 "results": [ 00:22:38.985 { 00:22:38.985 "job": "NVMe0n1", 00:22:38.985 "core_mask": "0x4", 00:22:38.985 "workload": "randread", 00:22:38.985 "status": "finished", 00:22:38.985 "queue_depth": 128, 00:22:38.985 "io_size": 4096, 00:22:38.985 "runtime": 8.133342, 00:22:38.985 "iops": 1937.7028532674515, 00:22:38.985 "mibps": 7.569151770575982, 00:22:38.985 "io_failed": 128, 00:22:38.985 "io_timeout": 0, 00:22:38.985 "avg_latency_us": 65407.08807470474, 00:22:38.985 "min_latency_us": 1556.48, 00:22:38.985 "max_latency_us": 7015926.69090909 00:22:38.985 } 00:22:38.985 ], 00:22:38.985 "core_count": 1 00:22:38.985 } 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.985 Attaching 5 probes... 
00:22:38.985 1393.857006: reset bdev controller NVMe0 00:22:38.985 1394.095914: reconnect bdev controller NVMe0 00:22:38.985 3394.384757: reconnect delay bdev controller NVMe0 00:22:38.985 3394.409964: reconnect bdev controller NVMe0 00:22:38.985 5396.052097: reconnect delay bdev controller NVMe0 00:22:38.985 5396.095876: reconnect bdev controller NVMe0 00:22:38.985 7396.511841: reconnect delay bdev controller NVMe0 00:22:38.985 7396.535938: reconnect bdev controller NVMe0 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 83014 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 83001 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 83001 ']' 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 83001 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.985 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83001 00:22:38.985 killing process with pid 83001 00:22:38.985 Received shutdown signal, test time was about 8.202453 seconds 00:22:38.985 00:22:38.986 Latency(us) 00:22:38.986 [2024-11-20T08:56:09.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.986 [2024-11-20T08:56:09.901Z] =================================================================================================================== 00:22:38.986 [2024-11-20T08:56:09.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:38.986 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:38.986 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:38.986 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83001' 00:22:38.986 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 83001 00:22:38.986 08:56:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 83001 00:22:39.245 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.504 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:39.504 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:39.504 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:39.504 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.763 08:56:10 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.763 rmmod nvme_tcp 00:22:39.763 rmmod nvme_fabrics 00:22:39.763 rmmod nvme_keyring 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82568 ']' 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82568 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82568 ']' 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82568 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82568 00:22:39.763 killing process with pid 82568 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82568' 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82568 00:22:39.763 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82568 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:40.023 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:40.282 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:40.282 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:40.282 08:56:10 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:40.282 08:56:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:22:40.282 00:22:40.282 real 0m48.155s 00:22:40.282 user 2m20.800s 00:22:40.282 sys 0m6.014s 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.282 ************************************ 00:22:40.282 END TEST nvmf_timeout 00:22:40.282 ************************************ 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:40.282 00:22:40.282 real 5m16.889s 00:22:40.282 user 13m44.759s 00:22:40.282 sys 1m11.333s 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.282 08:56:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.282 ************************************ 00:22:40.282 END TEST nvmf_host 00:22:40.282 ************************************ 00:22:40.541 08:56:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:22:40.541 08:56:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:22:40.541 00:22:40.541 real 13m24.697s 00:22:40.541 user 32m18.133s 00:22:40.541 sys 3m18.078s 00:22:40.541 08:56:11 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.541 08:56:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.541 ************************************ 00:22:40.541 END TEST nvmf_tcp 00:22:40.541 ************************************ 00:22:40.541 08:56:11 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:22:40.541 08:56:11 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:40.541 08:56:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:40.541 08:56:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.541 08:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.541 ************************************ 00:22:40.541 START TEST nvmf_dif 00:22:40.541 ************************************ 00:22:40.541 08:56:11 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:40.541 * Looking for test storage... 
00:22:40.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:40.541 08:56:11 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:40.541 08:56:11 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:22:40.541 08:56:11 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:40.541 08:56:11 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:40.542 08:56:11 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:22:40.542 08:56:11 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:40.542 08:56:11 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.542 --rc genhtml_branch_coverage=1 00:22:40.542 --rc genhtml_function_coverage=1 00:22:40.542 --rc genhtml_legend=1 00:22:40.542 --rc geninfo_all_blocks=1 00:22:40.542 --rc geninfo_unexecuted_blocks=1 00:22:40.542 00:22:40.542 ' 00:22:40.542 08:56:11 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.542 --rc genhtml_branch_coverage=1 00:22:40.542 --rc genhtml_function_coverage=1 00:22:40.542 --rc genhtml_legend=1 00:22:40.542 --rc geninfo_all_blocks=1 00:22:40.542 --rc geninfo_unexecuted_blocks=1 00:22:40.542 00:22:40.542 ' 00:22:40.542 08:56:11 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:22:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.542 --rc genhtml_branch_coverage=1 00:22:40.542 --rc genhtml_function_coverage=1 00:22:40.542 --rc genhtml_legend=1 00:22:40.542 --rc geninfo_all_blocks=1 00:22:40.542 --rc geninfo_unexecuted_blocks=1 00:22:40.542 00:22:40.542 ' 00:22:40.542 08:56:11 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.542 --rc genhtml_branch_coverage=1 00:22:40.542 --rc genhtml_function_coverage=1 00:22:40.542 --rc genhtml_legend=1 00:22:40.542 --rc geninfo_all_blocks=1 00:22:40.542 --rc geninfo_unexecuted_blocks=1 00:22:40.542 00:22:40.542 ' 00:22:40.542 08:56:11 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:40.542 08:56:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:40.542 08:56:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.542 08:56:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.542 08:56:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:40.802 08:56:11 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:22:40.802 08:56:11 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.802 08:56:11 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.802 08:56:11 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.802 08:56:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.802 08:56:11 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.802 08:56:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.802 08:56:11 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:40.802 08:56:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:40.802 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:40.802 08:56:11 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:40.802 08:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:40.802 08:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:40.802 08:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:40.802 08:56:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:40.802 08:56:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.803 08:56:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:40.803 08:56:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:40.803 08:56:11 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:40.803 Cannot find device "nvmf_init_br" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:40.803 Cannot find device "nvmf_init_br2" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:40.803 Cannot find device "nvmf_tgt_br" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@164 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:40.803 Cannot find device "nvmf_tgt_br2" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@165 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:40.803 Cannot find device "nvmf_init_br" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@166 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:40.803 Cannot find device "nvmf_init_br2" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@167 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:40.803 Cannot find device "nvmf_tgt_br" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@168 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:40.803 Cannot find device "nvmf_tgt_br2" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@169 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:40.803 Cannot find device "nvmf_br" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@170 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:22:40.803 Cannot find device "nvmf_init_if" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@171 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:40.803 Cannot find device "nvmf_init_if2" 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@172 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:40.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@173 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:40.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@174 -- # true 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:40.803 08:56:11 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:41.063 08:56:11 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:41.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:41.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:41.063 00:22:41.063 --- 10.0.0.3 ping statistics --- 00:22:41.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.063 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:41.063 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:41.063 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:22:41.063 00:22:41.063 --- 10.0.0.4 ping statistics --- 00:22:41.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.063 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:41.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:22:41.063 00:22:41.063 --- 10.0.0.1 ping statistics --- 00:22:41.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.063 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:41.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:41.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:22:41.063 00:22:41.063 --- 10.0.0.2 ping statistics --- 00:22:41.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.063 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.063 08:56:11 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:22:41.064 08:56:11 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:41.064 08:56:11 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:41.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:41.323 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:41.323 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:41.323 08:56:12 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.323 08:56:12 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:41.323 08:56:12 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:41.323 08:56:12 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.323 08:56:12 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:41.323 08:56:12 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:41.583 08:56:12 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:41.583 08:56:12 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:41.583 08:56:12 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:41.583 08:56:12 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.583 08:56:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:41.583 08:56:12 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83557 00:22:41.583 08:56:12 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83557 00:22:41.583 08:56:12 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:41.583 08:56:12 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83557 ']' 00:22:41.583 08:56:12 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.583 08:56:12 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.583 08:56:12 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.583 08:56:12 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.583 08:56:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:41.583 [2024-11-20 08:56:12.330114] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:41.583 [2024-11-20 08:56:12.330418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.583 [2024-11-20 08:56:12.485572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.842 [2024-11-20 08:56:12.548039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
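The nvmf_veth_init trace above stitches together the test network: a dedicated network namespace for the target, veth pairs bridged back to the initiator side, 10.0.0.0/24 addressing, iptables ACCEPT rules for NVMe/TCP port 4420, and ping checks in both directions. A condensed sketch of that fixture, keeping only one initiator/target pair and omitting the second pair and all error handling (interface names, addresses, and rules are taken from the trace; the ordering is simplified):

    # build the target namespace and one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: initiator on 10.0.0.1, target on 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the two veth peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic and bridge forwarding, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3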
00:22:41.842 [2024-11-20 08:56:12.548383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.842 [2024-11-20 08:56:12.548421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.842 [2024-11-20 08:56:12.548433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.842 [2024-11-20 08:56:12.548442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.842 [2024-11-20 08:56:12.548973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.842 [2024-11-20 08:56:12.625781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:41.842 08:56:12 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.842 08:56:12 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:22:41.842 08:56:12 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.842 08:56:12 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.842 08:56:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:41.842 08:56:12 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.842 08:56:12 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:41.842 08:56:12 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:41.842 08:56:12 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.842 08:56:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:41.842 [2024-11-20 08:56:12.752556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.101 08:56:12 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.101 08:56:12 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:42.101 08:56:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:42.101 08:56:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.101 08:56:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:42.101 ************************************ 00:22:42.101 START TEST fio_dif_1_default 00:22:42.101 ************************************ 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 bdev_null0 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:42.102 
08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 [2024-11-20 08:56:12.800773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:22:42.102 { 00:22:42.102 "params": { 00:22:42.102 "name": "Nvme$subsystem", 00:22:42.102 "trtype": "$TEST_TRANSPORT", 00:22:42.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.102 "adrfam": "ipv4", 00:22:42.102 "trsvcid": "$NVMF_PORT", 00:22:42.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.102 "hdgst": ${hdgst:-false}, 00:22:42.102 "ddgst": ${ddgst:-false} 00:22:42.102 }, 00:22:42.102 "method": "bdev_nvme_attach_controller" 00:22:42.102 } 00:22:42.102 EOF 00:22:42.102 )") 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:42.102 "params": { 00:22:42.102 "name": "Nvme0", 00:22:42.102 "trtype": "tcp", 00:22:42.102 "traddr": "10.0.0.3", 00:22:42.102 "adrfam": "ipv4", 00:22:42.102 "trsvcid": "4420", 00:22:42.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:42.102 "hdgst": false, 00:22:42.102 "ddgst": false 00:22:42.102 }, 00:22:42.102 "method": "bdev_nvme_attach_controller" 00:22:42.102 }' 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:42.102 08:56:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:42.362 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:42.362 fio-3.35 00:22:42.362 Starting 1 thread 00:22:54.570 00:22:54.570 filename0: (groupid=0, jobs=1): err= 0: pid=83616: Wed Nov 20 08:56:23 2024 00:22:54.570 read: IOPS=8854, BW=34.6MiB/s (36.3MB/s)(346MiB/10001msec) 00:22:54.570 slat (usec): min=6, max=340, avg= 8.56, stdev= 4.72 00:22:54.570 clat (usec): min=343, max=4397, avg=426.32, stdev=56.77 00:22:54.570 lat (usec): min=349, max=4438, avg=434.88, stdev=57.53 00:22:54.570 clat percentiles (usec): 00:22:54.570 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 
375], 20.00th=[ 388], 00:22:54.570 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 420], 60.00th=[ 433], 00:22:54.570 | 70.00th=[ 445], 80.00th=[ 461], 90.00th=[ 486], 95.00th=[ 510], 00:22:54.570 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 709], 99.95th=[ 906], 00:22:54.570 | 99.99th=[ 1434] 00:22:54.570 bw ( KiB/s): min=32032, max=36640, per=100.00%, avg=35444.21, stdev=1046.43, samples=19 00:22:54.571 iops : min= 8008, max= 9160, avg=8861.05, stdev=261.61, samples=19 00:22:54.571 lat (usec) : 500=93.23%, 750=6.69%, 1000=0.06% 00:22:54.571 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:22:54.571 cpu : usr=83.65%, sys=14.02%, ctx=92, majf=0, minf=9 00:22:54.571 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:54.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.571 issued rwts: total=88552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.571 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:54.571 00:22:54.571 Run status group 0 (all jobs): 00:22:54.571 READ: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=346MiB (363MB), run=10001-10001msec 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 ************************************ 00:22:54.571 END TEST fio_dif_1_default 00:22:54.571 ************************************ 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 00:22:54.571 real 0m11.133s 00:22:54.571 user 0m9.084s 00:22:54.571 sys 0m1.723s 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 08:56:23 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:54.571 08:56:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:54.571 08:56:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 ************************************ 00:22:54.571 START TEST fio_dif_1_multi_subsystems 00:22:54.571 ************************************ 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 bdev_null0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 [2024-11-20 08:56:23.988219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 bdev_null1 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.571 { 00:22:54.571 "params": { 00:22:54.571 "name": "Nvme$subsystem", 00:22:54.571 "trtype": "$TEST_TRANSPORT", 00:22:54.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.571 "adrfam": "ipv4", 00:22:54.571 "trsvcid": "$NVMF_PORT", 00:22:54.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.571 "hdgst": ${hdgst:-false}, 00:22:54.571 "ddgst": ${ddgst:-false} 00:22:54.571 }, 00:22:54.571 "method": "bdev_nvme_attach_controller" 00:22:54.571 } 00:22:54.571 EOF 00:22:54.571 )") 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:22:54.571 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.572 { 00:22:54.572 "params": { 00:22:54.572 "name": "Nvme$subsystem", 00:22:54.572 "trtype": "$TEST_TRANSPORT", 00:22:54.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.572 "adrfam": "ipv4", 00:22:54.572 "trsvcid": "$NVMF_PORT", 00:22:54.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.572 "hdgst": ${hdgst:-false}, 00:22:54.572 "ddgst": ${ddgst:-false} 00:22:54.572 }, 00:22:54.572 "method": "bdev_nvme_attach_controller" 00:22:54.572 } 00:22:54.572 EOF 00:22:54.572 )") 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
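As in the single-subsystem run above, the fio_bdev wrapper hands fio two anonymous file descriptors, one carrying the SPDK JSON configuration (the bdev_nvme_attach_controller entries printed next) and one carrying the generated job file, while preloading the SPDK bdev fio plugin. A minimal sketch of the same invocation pattern with the descriptors replaced by ordinary files (the bdev_nvme.json and dif_job.fio names are illustrative, not taken from the trace):

    # preload the SPDK bdev engine and point fio at the JSON config + job file
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf bdev_nvme.json dif_job.fio

In the test itself both documents are fed through /dev/fd, which keeps the per-run configuration ephemeral and avoids temporary-file cleanup between tests.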
00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:54.572 "params": { 00:22:54.572 "name": "Nvme0", 00:22:54.572 "trtype": "tcp", 00:22:54.572 "traddr": "10.0.0.3", 00:22:54.572 "adrfam": "ipv4", 00:22:54.572 "trsvcid": "4420", 00:22:54.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:54.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:54.572 "hdgst": false, 00:22:54.572 "ddgst": false 00:22:54.572 }, 00:22:54.572 "method": "bdev_nvme_attach_controller" 00:22:54.572 },{ 00:22:54.572 "params": { 00:22:54.572 "name": "Nvme1", 00:22:54.572 "trtype": "tcp", 00:22:54.572 "traddr": "10.0.0.3", 00:22:54.572 "adrfam": "ipv4", 00:22:54.572 "trsvcid": "4420", 00:22:54.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.572 "hdgst": false, 00:22:54.572 "ddgst": false 00:22:54.572 }, 00:22:54.572 "method": "bdev_nvme_attach_controller" 00:22:54.572 }' 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:54.572 08:56:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:54.572 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:54.572 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:54.572 fio-3.35 00:22:54.572 Starting 2 threads 00:23:04.550 00:23:04.550 filename0: (groupid=0, jobs=1): err= 0: pid=83780: Wed Nov 20 08:56:34 2024 00:23:04.550 read: IOPS=4891, BW=19.1MiB/s (20.0MB/s)(191MiB/10001msec) 00:23:04.550 slat (nsec): min=6529, max=70745, avg=13737.04, stdev=5442.98 00:23:04.550 clat (usec): min=567, max=7195, avg=778.75, stdev=113.75 00:23:04.550 lat (usec): min=578, max=7228, avg=792.49, stdev=114.94 00:23:04.550 clat percentiles (usec): 00:23:04.550 | 1.00th=[ 627], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 717], 00:23:04.550 | 30.00th=[ 734], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 791], 00:23:04.550 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:23:04.550 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 2008], 99.95th=[ 2089], 00:23:04.550 | 99.99th=[ 2999] 00:23:04.550 bw ( KiB/s): min=17120, max=20064, per=50.01%, avg=19572.21, stdev=632.96, samples=19 00:23:04.550 iops : min= 4280, 
max= 5016, avg=4893.05, stdev=158.24, samples=19 00:23:04.550 lat (usec) : 750=38.54%, 1000=61.01% 00:23:04.550 lat (msec) : 2=0.33%, 4=0.11%, 10=0.01% 00:23:04.550 cpu : usr=87.92%, sys=10.40%, ctx=16, majf=0, minf=0 00:23:04.550 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:04.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:04.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:04.551 issued rwts: total=48916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:04.551 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:04.551 filename1: (groupid=0, jobs=1): err= 0: pid=83781: Wed Nov 20 08:56:34 2024 00:23:04.551 read: IOPS=4892, BW=19.1MiB/s (20.0MB/s)(191MiB/10001msec) 00:23:04.551 slat (nsec): min=6443, max=71013, avg=12912.74, stdev=4817.38 00:23:04.551 clat (usec): min=187, max=6077, avg=781.93, stdev=100.61 00:23:04.551 lat (usec): min=197, max=6114, avg=794.84, stdev=101.05 00:23:04.551 clat percentiles (usec): 00:23:04.551 | 1.00th=[ 668], 5.00th=[ 685], 10.00th=[ 701], 20.00th=[ 725], 00:23:04.551 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 791], 00:23:04.551 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 881], 00:23:04.551 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 1958], 99.95th=[ 2008], 00:23:04.551 | 99.99th=[ 2147] 00:23:04.551 bw ( KiB/s): min=17218, max=20064, per=50.02%, avg=19577.37, stdev=611.91, samples=19 00:23:04.551 iops : min= 4304, max= 5016, avg=4894.32, stdev=153.09, samples=19 00:23:04.551 lat (usec) : 250=0.01%, 500=0.02%, 750=34.09%, 1000=65.47% 00:23:04.551 lat (msec) : 2=0.37%, 4=0.05%, 10=0.01% 00:23:04.551 cpu : usr=90.67%, sys=7.98%, ctx=14, majf=0, minf=0 00:23:04.551 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:04.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:04.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:04.551 issued rwts: total=48933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:04.551 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:04.551 00:23:04.551 Run status group 0 (all jobs): 00:23:04.551 READ: bw=38.2MiB/s (40.1MB/s), 19.1MiB/s-19.1MiB/s (20.0MB/s-20.0MB/s), io=382MiB (401MB), run=10001-10001msec 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.551 08:56:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 ************************************ 00:23:04.551 END TEST fio_dif_1_multi_subsystems 00:23:04.551 ************************************ 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.551 00:23:04.551 real 0m11.300s 00:23:04.551 user 0m18.739s 00:23:04.551 sys 0m2.154s 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 08:56:35 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:04.551 08:56:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:04.551 08:56:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 ************************************ 00:23:04.551 START TEST fio_dif_rand_params 00:23:04.551 ************************************ 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 bdev_null0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:04.551 [2024-11-20 08:56:35.345402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:04.551 { 00:23:04.551 "params": { 00:23:04.551 "name": "Nvme$subsystem", 00:23:04.551 "trtype": "$TEST_TRANSPORT", 00:23:04.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.551 "adrfam": "ipv4", 00:23:04.551 "trsvcid": "$NVMF_PORT", 00:23:04.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.551 "hdgst": ${hdgst:-false}, 00:23:04.551 "ddgst": ${ddgst:-false} 00:23:04.551 }, 00:23:04.551 "method": "bdev_nvme_attach_controller" 00:23:04.551 } 00:23:04.551 EOF 00:23:04.551 )") 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:04.551 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:04.552 "params": { 00:23:04.552 "name": "Nvme0", 00:23:04.552 "trtype": "tcp", 00:23:04.552 "traddr": "10.0.0.3", 00:23:04.552 "adrfam": "ipv4", 00:23:04.552 "trsvcid": "4420", 00:23:04.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:04.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:04.552 "hdgst": false, 00:23:04.552 "ddgst": false 00:23:04.552 }, 00:23:04.552 "method": "bdev_nvme_attach_controller" 00:23:04.552 }' 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:04.552 08:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:04.552 08:56:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:04.811 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:04.811 ... 00:23:04.811 fio-3.35 00:23:04.811 Starting 3 threads 00:23:11.378 00:23:11.378 filename0: (groupid=0, jobs=1): err= 0: pid=83940: Wed Nov 20 08:56:41 2024 00:23:11.379 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(158MiB/5003msec) 00:23:11.379 slat (nsec): min=6417, max=68484, avg=20257.96, stdev=12299.60 00:23:11.379 clat (usec): min=5222, max=12675, avg=11809.13, stdev=452.95 00:23:11.379 lat (usec): min=5232, max=12690, avg=11829.39, stdev=453.96 00:23:11.379 clat percentiles (usec): 00:23:11.379 | 1.00th=[10945], 5.00th=[11338], 10.00th=[11469], 20.00th=[11600], 00:23:11.379 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:23:11.379 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12256], 00:23:11.379 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:23:11.379 | 99.99th=[12649] 00:23:11.379 bw ( KiB/s): min=32256, max=33024, per=33.38%, avg=32426.67, stdev=338.66, samples=9 00:23:11.379 iops : min= 252, max= 258, avg=253.33, stdev= 2.65, samples=9 00:23:11.379 lat (msec) : 10=0.24%, 20=99.76% 00:23:11.379 cpu : usr=93.52%, sys=5.82%, ctx=7, majf=0, minf=0 00:23:11.379 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:11.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.379 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:11.379 filename0: (groupid=0, jobs=1): err= 0: pid=83941: Wed Nov 20 08:56:41 2024 00:23:11.379 read: IOPS=252, BW=31.6MiB/s (33.2MB/s)(158MiB/5005msec) 00:23:11.379 slat (usec): min=5, max=106, avg=23.93, stdev=13.97 00:23:11.379 clat (usec): min=7872, max=13899, avg=11804.03, stdev=395.23 00:23:11.379 lat (usec): min=7886, max=13915, avg=11827.96, stdev=396.66 00:23:11.379 clat percentiles (usec): 00:23:11.379 | 1.00th=[10683], 5.00th=[11207], 10.00th=[11338], 20.00th=[11600], 00:23:11.379 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:23:11.379 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12387], 00:23:11.379 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13829], 99.95th=[13960], 00:23:11.379 | 99.99th=[13960] 00:23:11.379 bw ( KiB/s): min=32256, max=33024, per=33.30%, avg=32341.33, stdev=256.00, samples=9 00:23:11.379 iops : min= 252, max= 258, avg=252.67, stdev= 2.00, samples=9 00:23:11.379 lat (msec) : 10=0.24%, 20=99.76% 00:23:11.379 cpu : usr=93.61%, sys=5.42%, ctx=79, majf=0, minf=0 00:23:11.379 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:11.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.379 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:11.379 filename0: (groupid=0, jobs=1): err= 0: pid=83942: Wed Nov 20 08:56:41 2024 00:23:11.379 read: IOPS=252, BW=31.6MiB/s (33.2MB/s)(158MiB/5005msec) 00:23:11.379 slat (usec): min=6, max=106, avg=23.91, stdev=14.07 00:23:11.379 clat (usec): min=7864, 
max=13849, avg=11805.10, stdev=412.55 00:23:11.379 lat (usec): min=7878, max=13882, avg=11829.00, stdev=413.84 00:23:11.379 clat percentiles (usec): 00:23:11.379 | 1.00th=[10683], 5.00th=[11207], 10.00th=[11338], 20.00th=[11600], 00:23:11.379 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:23:11.379 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12387], 00:23:11.379 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13829], 99.95th=[13829], 00:23:11.379 | 99.99th=[13829] 00:23:11.379 bw ( KiB/s): min=32256, max=33024, per=33.30%, avg=32341.33, stdev=256.00, samples=9 00:23:11.379 iops : min= 252, max= 258, avg=252.67, stdev= 2.00, samples=9 00:23:11.379 lat (msec) : 10=0.47%, 20=99.53% 00:23:11.379 cpu : usr=93.94%, sys=5.44%, ctx=6, majf=0, minf=0 00:23:11.379 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:11.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.379 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:11.379 00:23:11.379 Run status group 0 (all jobs): 00:23:11.379 READ: bw=94.9MiB/s (99.5MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=475MiB (498MB), run=5003-5005msec 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- 
# create_subsystem 0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 bdev_null0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 [2024-11-20 08:56:41.530973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 bdev_null1 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 bdev_null2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:23:11.379 { 00:23:11.379 "params": { 00:23:11.379 "name": "Nvme$subsystem", 00:23:11.379 "trtype": "$TEST_TRANSPORT", 00:23:11.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.379 "adrfam": "ipv4", 00:23:11.379 "trsvcid": "$NVMF_PORT", 00:23:11.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.379 "hdgst": ${hdgst:-false}, 00:23:11.379 "ddgst": ${ddgst:-false} 00:23:11.379 }, 00:23:11.379 "method": "bdev_nvme_attach_controller" 00:23:11.379 } 00:23:11.379 EOF 00:23:11.379 )") 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:11.379 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.380 { 00:23:11.380 "params": { 00:23:11.380 "name": "Nvme$subsystem", 00:23:11.380 "trtype": "$TEST_TRANSPORT", 00:23:11.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.380 "adrfam": "ipv4", 00:23:11.380 "trsvcid": "$NVMF_PORT", 00:23:11.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.380 "hdgst": ${hdgst:-false}, 00:23:11.380 "ddgst": ${ddgst:-false} 00:23:11.380 }, 00:23:11.380 "method": "bdev_nvme_attach_controller" 00:23:11.380 } 00:23:11.380 EOF 00:23:11.380 )") 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 
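For reference, the config fragments being assembled in the trace above feed fio's spdk_bdev ioengine through /dev/fd/62. Run standalone, the equivalent would look roughly like the sketch below. This is illustrative only, not part of the captured run: the outer "subsystems"/"bdev" JSON wrapper, the bdev name Nvme0n1, and the /tmp/bdev.json path are assumptions, while the connection parameters, plugin path, and fio flags are taken verbatim from the trace.

    # Minimal bdev config pointing at the NVMe-oF/TCP target the test created
    # (connection parameters copied from the printed gen_nvmf_target_json output)
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Run fio through the SPDK fio plugin against that config; the plugin path is the
    # one preloaded in the trace, and Nvme0n1 is the assumed namespace bdev name.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
        --spdk_json_conf /tmp/bdev.json --thread=1 \
        --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=16 --numjobs=8 --runtime=5 --time_based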
00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.380 { 00:23:11.380 "params": { 00:23:11.380 "name": "Nvme$subsystem", 00:23:11.380 "trtype": "$TEST_TRANSPORT", 00:23:11.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.380 "adrfam": "ipv4", 00:23:11.380 "trsvcid": "$NVMF_PORT", 00:23:11.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.380 "hdgst": ${hdgst:-false}, 00:23:11.380 "ddgst": ${ddgst:-false} 00:23:11.380 }, 00:23:11.380 "method": "bdev_nvme_attach_controller" 00:23:11.380 } 00:23:11.380 EOF 00:23:11.380 )") 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:11.380 "params": { 00:23:11.380 "name": "Nvme0", 00:23:11.380 "trtype": "tcp", 00:23:11.380 "traddr": "10.0.0.3", 00:23:11.380 "adrfam": "ipv4", 00:23:11.380 "trsvcid": "4420", 00:23:11.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:11.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:11.380 "hdgst": false, 00:23:11.380 "ddgst": false 00:23:11.380 }, 00:23:11.380 "method": "bdev_nvme_attach_controller" 00:23:11.380 },{ 00:23:11.380 "params": { 00:23:11.380 "name": "Nvme1", 00:23:11.380 "trtype": "tcp", 00:23:11.380 "traddr": "10.0.0.3", 00:23:11.380 "adrfam": "ipv4", 00:23:11.380 "trsvcid": "4420", 00:23:11.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.380 "hdgst": false, 00:23:11.380 "ddgst": false 00:23:11.380 }, 00:23:11.380 "method": "bdev_nvme_attach_controller" 00:23:11.380 },{ 00:23:11.380 "params": { 00:23:11.380 "name": "Nvme2", 00:23:11.380 "trtype": "tcp", 00:23:11.380 "traddr": "10.0.0.3", 00:23:11.380 "adrfam": "ipv4", 00:23:11.380 "trsvcid": "4420", 00:23:11.380 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.380 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.380 "hdgst": false, 00:23:11.380 "ddgst": false 00:23:11.380 }, 00:23:11.380 "method": "bdev_nvme_attach_controller" 00:23:11.380 }' 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.380 
08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:11.380 08:56:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.380 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:11.380 ... 00:23:11.380 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:11.380 ... 00:23:11.380 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:11.380 ... 00:23:11.380 fio-3.35 00:23:11.380 Starting 24 threads 00:23:23.588 00:23:23.588 filename0: (groupid=0, jobs=1): err= 0: pid=84042: Wed Nov 20 08:56:52 2024 00:23:23.588 read: IOPS=177, BW=709KiB/s (726kB/s)(7100KiB/10017msec) 00:23:23.588 slat (usec): min=5, max=12029, avg=44.71, stdev=475.08 00:23:23.588 clat (msec): min=24, max=192, avg=90.05, stdev=26.68 00:23:23.588 lat (msec): min=24, max=192, avg=90.09, stdev=26.66 00:23:23.588 clat percentiles (msec): 00:23:23.588 | 1.00th=[ 27], 5.00th=[ 46], 10.00th=[ 58], 20.00th=[ 71], 00:23:23.588 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 99], 00:23:23.588 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 132], 00:23:23.588 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 192], 00:23:23.588 | 99.99th=[ 192] 00:23:23.588 bw ( KiB/s): min= 512, max= 1131, per=4.12%, avg=705.50, stdev=155.97, samples=20 00:23:23.588 iops : min= 128, max= 282, avg=176.30, stdev=38.89, samples=20 00:23:23.588 lat (msec) : 50=8.68%, 100=52.28%, 250=39.04% 00:23:23.588 cpu : usr=32.03%, sys=1.11%, ctx=859, majf=0, minf=9 00:23:23.588 IO depths : 1=0.1%, 2=1.5%, 4=5.6%, 8=77.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:23:23.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.588 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.588 issued rwts: total=1775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.588 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.588 filename0: (groupid=0, jobs=1): err= 0: pid=84043: Wed Nov 20 08:56:52 2024 00:23:23.588 read: IOPS=185, BW=742KiB/s (760kB/s)(7428KiB/10008msec) 00:23:23.588 slat (usec): min=5, max=12025, avg=25.74, stdev=294.06 00:23:23.588 clat (msec): min=8, max=156, avg=86.12, stdev=26.15 00:23:23.588 lat (msec): min=8, max=156, avg=86.15, stdev=26.14 00:23:23.588 clat percentiles (msec): 00:23:23.588 | 1.00th=[ 14], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 69], 00:23:23.588 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 96], 00:23:23.588 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 124], 00:23:23.588 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:23:23.588 | 99.99th=[ 157] 00:23:23.588 bw ( KiB/s): min= 608, max= 1024, per=4.19%, avg=716.21, stdev=99.61, samples=19 00:23:23.588 iops : min= 152, max= 256, avg=179.05, stdev=24.90, samples=19 00:23:23.588 lat (msec) : 10=0.48%, 20=0.81%, 50=8.67%, 100=56.49%, 250=33.55% 00:23:23.588 cpu : usr=31.83%, sys=1.20%, ctx=924, majf=0, minf=9 00:23:23.588 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.7%, 
16=15.6%, 32=0.0%, >=64=0.0% 00:23:23.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.588 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.588 issued rwts: total=1857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.588 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.588 filename0: (groupid=0, jobs=1): err= 0: pid=84044: Wed Nov 20 08:56:52 2024 00:23:23.588 read: IOPS=156, BW=628KiB/s (643kB/s)(6280KiB/10002msec) 00:23:23.588 slat (usec): min=5, max=5027, avg=24.70, stdev=162.63 00:23:23.588 clat (msec): min=4, max=193, avg=101.75, stdev=31.27 00:23:23.588 lat (msec): min=4, max=193, avg=101.77, stdev=31.27 00:23:23.588 clat percentiles (msec): 00:23:23.588 | 1.00th=[ 8], 5.00th=[ 48], 10.00th=[ 62], 20.00th=[ 75], 00:23:23.588 | 30.00th=[ 94], 40.00th=[ 103], 50.00th=[ 108], 60.00th=[ 111], 00:23:23.588 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 138], 95.00th=[ 148], 00:23:23.588 | 99.00th=[ 188], 99.50th=[ 188], 99.90th=[ 194], 99.95th=[ 194], 00:23:23.588 | 99.99th=[ 194] 00:23:23.588 bw ( KiB/s): min= 384, max= 880, per=3.45%, avg=589.47, stdev=119.26, samples=19 00:23:23.588 iops : min= 96, max= 220, avg=147.37, stdev=29.81, samples=19 00:23:23.588 lat (msec) : 10=1.08%, 20=0.96%, 50=4.27%, 100=32.42%, 250=61.27% 00:23:23.588 cpu : usr=42.70%, sys=1.99%, ctx=1232, majf=0, minf=9 00:23:23.588 IO depths : 1=0.2%, 2=5.0%, 4=19.8%, 8=62.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:23:23.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 complete : 0=0.0%, 4=92.8%, 8=2.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 issued rwts: total=1570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.589 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.589 filename0: (groupid=0, jobs=1): err= 0: pid=84045: Wed Nov 20 08:56:52 2024 00:23:23.589 read: IOPS=167, BW=669KiB/s (685kB/s)(6696KiB/10003msec) 00:23:23.589 slat (usec): min=5, max=8040, avg=43.28, stdev=445.47 00:23:23.589 clat (msec): min=3, max=159, avg=95.34, stdev=30.61 00:23:23.589 lat (msec): min=3, max=159, avg=95.38, stdev=30.62 00:23:23.589 clat percentiles (msec): 00:23:23.589 | 1.00th=[ 6], 5.00th=[ 46], 10.00th=[ 59], 20.00th=[ 72], 00:23:23.589 | 30.00th=[ 75], 40.00th=[ 92], 50.00th=[ 104], 60.00th=[ 108], 00:23:23.589 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 142], 00:23:23.589 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:23:23.589 | 99.99th=[ 159] 00:23:23.589 bw ( KiB/s): min= 400, max= 785, per=3.66%, avg=625.32, stdev=97.17, samples=19 00:23:23.589 iops : min= 100, max= 196, avg=156.32, stdev=24.27, samples=19 00:23:23.589 lat (msec) : 4=0.60%, 10=1.67%, 20=0.90%, 50=2.63%, 100=40.62% 00:23:23.589 lat (msec) : 250=53.58% 00:23:23.589 cpu : usr=35.41%, sys=1.46%, ctx=1133, majf=0, minf=9 00:23:23.589 IO depths : 1=0.1%, 2=3.0%, 4=11.9%, 8=70.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:23:23.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 complete : 0=0.0%, 4=90.3%, 8=7.1%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.589 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.589 filename0: (groupid=0, jobs=1): err= 0: pid=84046: Wed Nov 20 08:56:52 2024 00:23:23.589 read: IOPS=178, BW=712KiB/s (729kB/s)(7140KiB/10026msec) 00:23:23.589 slat (usec): min=5, max=4063, avg=26.35, stdev=165.22 00:23:23.589 clat (msec): min=24, max=169, 
avg=89.65, stdev=27.46 00:23:23.589 lat (msec): min=24, max=169, avg=89.68, stdev=27.46 00:23:23.589 clat percentiles (msec): 00:23:23.589 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 56], 20.00th=[ 70], 00:23:23.589 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 102], 00:23:23.589 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 131], 00:23:23.589 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 169], 00:23:23.589 | 99.99th=[ 169] 00:23:23.589 bw ( KiB/s): min= 512, max= 1280, per=4.15%, avg=710.10, stdev=181.00, samples=20 00:23:23.589 iops : min= 128, max= 320, avg=177.50, stdev=45.25, samples=20 00:23:23.589 lat (msec) : 50=9.08%, 100=49.97%, 250=40.95% 00:23:23.589 cpu : usr=44.89%, sys=1.72%, ctx=1355, majf=0, minf=9 00:23:23.589 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:23:23.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 issued rwts: total=1785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.589 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.589 filename0: (groupid=0, jobs=1): err= 0: pid=84047: Wed Nov 20 08:56:52 2024 00:23:23.589 read: IOPS=174, BW=697KiB/s (714kB/s)(7008KiB/10050msec) 00:23:23.589 slat (usec): min=5, max=3338, avg=18.22, stdev=79.90 00:23:23.589 clat (msec): min=4, max=160, avg=91.58, stdev=32.83 00:23:23.589 lat (msec): min=4, max=160, avg=91.59, stdev=32.83 00:23:23.589 clat percentiles (msec): 00:23:23.589 | 1.00th=[ 6], 5.00th=[ 22], 10.00th=[ 46], 20.00th=[ 69], 00:23:23.589 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 102], 60.00th=[ 107], 00:23:23.589 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 125], 95.00th=[ 138], 00:23:23.589 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 161], 00:23:23.589 | 99.99th=[ 161] 00:23:23.589 bw ( KiB/s): min= 512, max= 1904, per=4.05%, avg=693.95, stdev=299.70, samples=20 00:23:23.589 iops : min= 128, max= 476, avg=173.45, stdev=74.94, samples=20 00:23:23.589 lat (msec) : 10=1.83%, 20=2.74%, 50=6.45%, 100=37.50%, 250=51.48% 00:23:23.589 cpu : usr=43.88%, sys=1.95%, ctx=1250, majf=0, minf=0 00:23:23.589 IO depths : 1=0.1%, 2=2.5%, 4=10.0%, 8=72.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:23.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 complete : 0=0.0%, 4=90.1%, 8=7.7%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.589 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.589 filename0: (groupid=0, jobs=1): err= 0: pid=84048: Wed Nov 20 08:56:52 2024 00:23:23.589 read: IOPS=184, BW=737KiB/s (755kB/s)(7376KiB/10008msec) 00:23:23.589 slat (usec): min=4, max=8040, avg=32.14, stdev=294.86 00:23:23.589 clat (msec): min=8, max=144, avg=86.64, stdev=27.10 00:23:23.589 lat (msec): min=8, max=144, avg=86.67, stdev=27.10 00:23:23.589 clat percentiles (msec): 00:23:23.589 | 1.00th=[ 22], 5.00th=[ 40], 10.00th=[ 51], 20.00th=[ 66], 00:23:23.589 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 96], 00:23:23.589 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 128], 00:23:23.589 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:23:23.589 | 99.99th=[ 144] 00:23:23.589 bw ( KiB/s): min= 512, max= 1152, per=4.15%, avg=709.05, stdev=137.85, samples=19 00:23:23.589 iops : min= 128, max= 288, avg=177.26, stdev=34.46, samples=19 00:23:23.589 lat (msec) : 10=0.16%, 
20=0.70%, 50=9.11%, 100=52.06%, 250=37.96% 00:23:23.589 cpu : usr=37.43%, sys=1.76%, ctx=1060, majf=0, minf=9 00:23:23.589 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:23.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 complete : 0=0.0%, 4=88.0%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.589 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.589 filename0: (groupid=0, jobs=1): err= 0: pid=84049: Wed Nov 20 08:56:52 2024 00:23:23.589 read: IOPS=184, BW=736KiB/s (754kB/s)(7372KiB/10012msec) 00:23:23.589 slat (usec): min=5, max=4037, avg=23.67, stdev=94.20 00:23:23.589 clat (msec): min=10, max=153, avg=86.79, stdev=26.03 00:23:23.589 lat (msec): min=10, max=153, avg=86.81, stdev=26.03 00:23:23.589 clat percentiles (msec): 00:23:23.589 | 1.00th=[ 21], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 67], 00:23:23.589 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 96], 00:23:23.589 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 125], 00:23:23.589 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 155], 00:23:23.589 | 99.99th=[ 155] 00:23:23.589 bw ( KiB/s): min= 560, max= 1032, per=4.16%, avg=711.16, stdev=101.71, samples=19 00:23:23.589 iops : min= 140, max= 258, avg=177.79, stdev=25.43, samples=19 00:23:23.589 lat (msec) : 20=0.81%, 50=8.36%, 100=56.81%, 250=34.02% 00:23:23.589 cpu : usr=33.34%, sys=1.30%, ctx=985, majf=0, minf=9 00:23:23.589 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=82.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:23.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.589 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.589 filename1: (groupid=0, jobs=1): err= 0: pid=84050: Wed Nov 20 08:56:52 2024 00:23:23.589 read: IOPS=171, BW=685KiB/s (702kB/s)(6880KiB/10042msec) 00:23:23.589 slat (usec): min=4, max=8037, avg=20.83, stdev=193.60 00:23:23.589 clat (msec): min=24, max=182, avg=93.13, stdev=27.30 00:23:23.589 lat (msec): min=24, max=182, avg=93.15, stdev=27.31 00:23:23.589 clat percentiles (msec): 00:23:23.589 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 60], 20.00th=[ 72], 00:23:23.589 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 108], 00:23:23.589 | 70.00th=[ 109], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 132], 00:23:23.589 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 184], 99.95th=[ 184], 00:23:23.589 | 99.99th=[ 184] 00:23:23.589 bw ( KiB/s): min= 512, max= 1392, per=4.00%, avg=683.80, stdev=195.27, samples=20 00:23:23.589 iops : min= 128, max= 348, avg=170.95, stdev=48.82, samples=20 00:23:23.589 lat (msec) : 50=7.56%, 100=47.33%, 250=45.12% 00:23:23.589 cpu : usr=31.87%, sys=1.08%, ctx=914, majf=0, minf=9 00:23:23.589 IO depths : 1=0.2%, 2=2.4%, 4=9.0%, 8=73.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:23.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 complete : 0=0.0%, 4=89.8%, 8=8.2%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 issued rwts: total=1720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.589 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.589 filename1: (groupid=0, jobs=1): err= 0: pid=84051: Wed Nov 20 08:56:52 2024 00:23:23.589 read: IOPS=172, BW=690KiB/s (706kB/s)(6932KiB/10053msec) 
00:23:23.589 slat (usec): min=3, max=8042, avg=35.54, stdev=385.00 00:23:23.589 clat (msec): min=4, max=167, avg=92.57, stdev=33.35 00:23:23.589 lat (msec): min=4, max=167, avg=92.60, stdev=33.35 00:23:23.589 clat percentiles (msec): 00:23:23.589 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 48], 20.00th=[ 70], 00:23:23.589 | 30.00th=[ 73], 40.00th=[ 86], 50.00th=[ 104], 60.00th=[ 108], 00:23:23.589 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 136], 00:23:23.589 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 167], 00:23:23.589 | 99.99th=[ 167] 00:23:23.589 bw ( KiB/s): min= 496, max= 1788, per=4.01%, avg=686.85, stdev=280.20, samples=20 00:23:23.589 iops : min= 124, max= 447, avg=171.65, stdev=70.04, samples=20 00:23:23.589 lat (msec) : 10=1.73%, 20=2.89%, 50=6.98%, 100=37.28%, 250=51.13% 00:23:23.589 cpu : usr=33.39%, sys=1.12%, ctx=893, majf=0, minf=0 00:23:23.589 IO depths : 1=0.1%, 2=2.6%, 4=10.7%, 8=71.7%, 16=14.9%, 32=0.0%, >=64=0.0% 00:23:23.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 complete : 0=0.0%, 4=90.4%, 8=7.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.589 issued rwts: total=1733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.589 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.589 filename1: (groupid=0, jobs=1): err= 0: pid=84052: Wed Nov 20 08:56:52 2024 00:23:23.589 read: IOPS=183, BW=732KiB/s (750kB/s)(7336KiB/10020msec) 00:23:23.589 slat (usec): min=4, max=8034, avg=34.15, stdev=374.14 00:23:23.589 clat (msec): min=25, max=155, avg=87.26, stdev=25.61 00:23:23.589 lat (msec): min=25, max=155, avg=87.30, stdev=25.62 00:23:23.589 clat percentiles (msec): 00:23:23.589 | 1.00th=[ 31], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 69], 00:23:23.589 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 96], 00:23:23.589 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 129], 00:23:23.589 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:23:23.589 | 99.99th=[ 157] 00:23:23.590 bw ( KiB/s): min= 560, max= 1081, per=4.26%, avg=728.20, stdev=135.03, samples=20 00:23:23.590 iops : min= 140, max= 270, avg=182.00, stdev=33.74, samples=20 00:23:23.590 lat (msec) : 50=9.54%, 100=56.27%, 250=34.19% 00:23:23.590 cpu : usr=31.74%, sys=1.21%, ctx=898, majf=0, minf=9 00:23:23.590 IO depths : 1=0.2%, 2=0.5%, 4=1.7%, 8=81.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:23.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 issued rwts: total=1834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.590 filename1: (groupid=0, jobs=1): err= 0: pid=84053: Wed Nov 20 08:56:52 2024 00:23:23.590 read: IOPS=183, BW=734KiB/s (751kB/s)(7372KiB/10047msec) 00:23:23.590 slat (usec): min=4, max=8030, avg=22.75, stdev=186.88 00:23:23.590 clat (msec): min=14, max=173, avg=87.01, stdev=27.85 00:23:23.590 lat (msec): min=14, max=173, avg=87.03, stdev=27.85 00:23:23.590 clat percentiles (msec): 00:23:23.590 | 1.00th=[ 18], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 66], 00:23:23.590 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 99], 00:23:23.590 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 127], 00:23:23.590 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 174], 00:23:23.590 | 99.99th=[ 174] 00:23:23.590 bw ( KiB/s): min= 512, max= 1592, per=4.27%, avg=730.85, stdev=225.91, samples=20 
00:23:23.590 iops : min= 128, max= 398, avg=182.70, stdev=56.48, samples=20 00:23:23.590 lat (msec) : 20=1.09%, 50=10.04%, 100=49.54%, 250=39.34% 00:23:23.590 cpu : usr=40.37%, sys=1.52%, ctx=1638, majf=0, minf=9 00:23:23.590 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:23.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.590 filename1: (groupid=0, jobs=1): err= 0: pid=84054: Wed Nov 20 08:56:52 2024 00:23:23.590 read: IOPS=186, BW=745KiB/s (763kB/s)(7476KiB/10030msec) 00:23:23.590 slat (usec): min=3, max=8038, avg=39.39, stdev=346.82 00:23:23.590 clat (msec): min=12, max=152, avg=85.69, stdev=27.12 00:23:23.590 lat (msec): min=12, max=152, avg=85.73, stdev=27.14 00:23:23.590 clat percentiles (msec): 00:23:23.590 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 63], 00:23:23.590 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 96], 00:23:23.590 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 125], 00:23:23.590 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 153], 00:23:23.590 | 99.99th=[ 153] 00:23:23.590 bw ( KiB/s): min= 512, max= 1368, per=4.33%, avg=741.20, stdev=186.20, samples=20 00:23:23.590 iops : min= 128, max= 342, avg=185.30, stdev=46.55, samples=20 00:23:23.590 lat (msec) : 20=0.05%, 50=12.47%, 100=51.90%, 250=35.58% 00:23:23.590 cpu : usr=40.54%, sys=1.60%, ctx=1241, majf=0, minf=9 00:23:23.590 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:23.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 issued rwts: total=1869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.590 filename1: (groupid=0, jobs=1): err= 0: pid=84055: Wed Nov 20 08:56:52 2024 00:23:23.590 read: IOPS=184, BW=737KiB/s (755kB/s)(7400KiB/10042msec) 00:23:23.590 slat (usec): min=7, max=8031, avg=34.36, stdev=348.41 00:23:23.590 clat (msec): min=13, max=155, avg=86.61, stdev=28.79 00:23:23.590 lat (msec): min=13, max=155, avg=86.65, stdev=28.79 00:23:23.590 clat percentiles (msec): 00:23:23.590 | 1.00th=[ 15], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 66], 00:23:23.590 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 99], 00:23:23.590 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 124], 00:23:23.590 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 157], 00:23:23.590 | 99.99th=[ 157] 00:23:23.590 bw ( KiB/s): min= 560, max= 1648, per=4.29%, avg=733.65, stdev=239.49, samples=20 00:23:23.590 iops : min= 140, max= 412, avg=183.40, stdev=59.87, samples=20 00:23:23.590 lat (msec) : 20=2.92%, 50=9.95%, 100=49.41%, 250=37.73% 00:23:23.590 cpu : usr=39.03%, sys=1.55%, ctx=1060, majf=0, minf=9 00:23:23.590 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:23.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 issued rwts: total=1850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.590 filename1: (groupid=0, jobs=1): err= 0: 
pid=84056: Wed Nov 20 08:56:52 2024 00:23:23.590 read: IOPS=177, BW=708KiB/s (725kB/s)(7096KiB/10019msec) 00:23:23.590 slat (usec): min=4, max=8047, avg=30.11, stdev=329.75 00:23:23.590 clat (msec): min=21, max=156, avg=90.14, stdev=26.17 00:23:23.590 lat (msec): min=21, max=156, avg=90.17, stdev=26.16 00:23:23.590 clat percentiles (msec): 00:23:23.590 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 58], 20.00th=[ 72], 00:23:23.590 | 30.00th=[ 74], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 101], 00:23:23.590 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 131], 00:23:23.590 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:23:23.590 | 99.99th=[ 157] 00:23:23.590 bw ( KiB/s): min= 528, max= 1126, per=4.12%, avg=705.30, stdev=146.90, samples=20 00:23:23.590 iops : min= 132, max= 281, avg=176.30, stdev=36.65, samples=20 00:23:23.590 lat (msec) : 50=8.57%, 100=51.80%, 250=39.63% 00:23:23.590 cpu : usr=36.94%, sys=1.53%, ctx=1209, majf=0, minf=9 00:23:23.590 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:23:23.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 issued rwts: total=1774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.590 filename1: (groupid=0, jobs=1): err= 0: pid=84057: Wed Nov 20 08:56:52 2024 00:23:23.590 read: IOPS=183, BW=733KiB/s (751kB/s)(7352KiB/10026msec) 00:23:23.590 slat (usec): min=4, max=12025, avg=29.85, stdev=349.65 00:23:23.590 clat (msec): min=17, max=172, avg=87.05, stdev=27.37 00:23:23.590 lat (msec): min=17, max=172, avg=87.08, stdev=27.37 00:23:23.590 clat percentiles (msec): 00:23:23.590 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 67], 00:23:23.590 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 97], 00:23:23.590 | 70.00th=[ 107], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 128], 00:23:23.590 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 174], 00:23:23.590 | 99.99th=[ 174] 00:23:23.590 bw ( KiB/s): min= 560, max= 1290, per=4.28%, avg=731.40, stdev=180.00, samples=20 00:23:23.590 iops : min= 140, max= 322, avg=182.80, stdev=44.93, samples=20 00:23:23.590 lat (msec) : 20=0.87%, 50=10.45%, 100=50.82%, 250=37.87% 00:23:23.590 cpu : usr=40.82%, sys=1.60%, ctx=1280, majf=0, minf=9 00:23:23.590 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=82.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:23.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.590 filename2: (groupid=0, jobs=1): err= 0: pid=84058: Wed Nov 20 08:56:52 2024 00:23:23.590 read: IOPS=167, BW=672KiB/s (688kB/s)(6724KiB/10010msec) 00:23:23.590 slat (nsec): min=3693, max=60324, avg=17592.05, stdev=9388.45 00:23:23.590 clat (msec): min=10, max=183, avg=95.15, stdev=28.80 00:23:23.590 lat (msec): min=10, max=183, avg=95.16, stdev=28.81 00:23:23.590 clat percentiles (msec): 00:23:23.590 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:23:23.590 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 99], 60.00th=[ 108], 00:23:23.590 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 144], 00:23:23.590 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 184], 00:23:23.590 | 
99.99th=[ 184] 00:23:23.590 bw ( KiB/s): min= 496, max= 1024, per=3.77%, avg=645.05, stdev=118.93, samples=19 00:23:23.590 iops : min= 124, max= 256, avg=161.26, stdev=29.73, samples=19 00:23:23.590 lat (msec) : 20=0.77%, 50=6.42%, 100=45.51%, 250=47.29% 00:23:23.590 cpu : usr=32.35%, sys=1.27%, ctx=871, majf=0, minf=9 00:23:23.590 IO depths : 1=0.1%, 2=2.8%, 4=11.1%, 8=71.6%, 16=14.4%, 32=0.0%, >=64=0.0% 00:23:23.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 complete : 0=0.0%, 4=90.1%, 8=7.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 issued rwts: total=1681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.590 filename2: (groupid=0, jobs=1): err= 0: pid=84059: Wed Nov 20 08:56:52 2024 00:23:23.590 read: IOPS=190, BW=763KiB/s (782kB/s)(7636KiB/10005msec) 00:23:23.590 slat (usec): min=5, max=8032, avg=37.77, stdev=355.03 00:23:23.590 clat (msec): min=4, max=143, avg=83.71, stdev=27.53 00:23:23.590 lat (msec): min=4, max=143, avg=83.75, stdev=27.53 00:23:23.590 clat percentiles (msec): 00:23:23.590 | 1.00th=[ 9], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 61], 00:23:23.590 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 95], 00:23:23.590 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 127], 00:23:23.590 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:23:23.590 | 99.99th=[ 144] 00:23:23.590 bw ( KiB/s): min= 568, max= 1112, per=4.27%, avg=730.95, stdev=124.57, samples=19 00:23:23.590 iops : min= 142, max= 278, avg=182.74, stdev=31.14, samples=19 00:23:23.590 lat (msec) : 10=1.52%, 20=0.31%, 50=10.58%, 100=55.32%, 250=32.27% 00:23:23.590 cpu : usr=39.23%, sys=1.48%, ctx=1065, majf=0, minf=9 00:23:23.590 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=83.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:23.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.590 issued rwts: total=1909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.590 filename2: (groupid=0, jobs=1): err= 0: pid=84060: Wed Nov 20 08:56:52 2024 00:23:23.590 read: IOPS=184, BW=738KiB/s (756kB/s)(7384KiB/10003msec) 00:23:23.590 slat (usec): min=5, max=8039, avg=28.68, stdev=229.49 00:23:23.590 clat (usec): min=1423, max=189130, avg=86551.45, stdev=29589.32 00:23:23.591 lat (usec): min=1432, max=189153, avg=86580.13, stdev=29590.34 00:23:23.591 clat percentiles (msec): 00:23:23.591 | 1.00th=[ 4], 5.00th=[ 40], 10.00th=[ 55], 20.00th=[ 66], 00:23:23.591 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 97], 00:23:23.591 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 120], 95.00th=[ 128], 00:23:23.591 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 190], 99.95th=[ 190], 00:23:23.591 | 99.99th=[ 190] 00:23:23.591 bw ( KiB/s): min= 496, max= 912, per=4.04%, avg=691.37, stdev=92.81, samples=19 00:23:23.591 iops : min= 124, max= 228, avg=172.84, stdev=23.20, samples=19 00:23:23.591 lat (msec) : 2=0.33%, 4=1.41%, 10=1.52%, 20=0.81%, 50=4.60% 00:23:23.591 lat (msec) : 100=53.74%, 250=37.59% 00:23:23.591 cpu : usr=43.24%, sys=1.53%, ctx=1740, majf=0, minf=9 00:23:23.591 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:23.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:23:23.591 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.591 filename2: (groupid=0, jobs=1): err= 0: pid=84061: Wed Nov 20 08:56:52 2024 00:23:23.591 read: IOPS=185, BW=743KiB/s (761kB/s)(7464KiB/10048msec) 00:23:23.591 slat (usec): min=5, max=8029, avg=25.99, stdev=268.74 00:23:23.591 clat (msec): min=8, max=175, avg=85.95, stdev=30.14 00:23:23.591 lat (msec): min=8, max=175, avg=85.98, stdev=30.14 00:23:23.591 clat percentiles (msec): 00:23:23.591 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 45], 20.00th=[ 63], 00:23:23.591 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 101], 00:23:23.591 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 130], 00:23:23.591 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 176], 00:23:23.591 | 99.99th=[ 176] 00:23:23.591 bw ( KiB/s): min= 512, max= 1723, per=4.32%, avg=739.65, stdev=259.32, samples=20 00:23:23.591 iops : min= 128, max= 430, avg=184.85, stdev=64.69, samples=20 00:23:23.591 lat (msec) : 10=0.86%, 20=3.00%, 50=9.27%, 100=46.46%, 250=40.41% 00:23:23.591 cpu : usr=36.13%, sys=1.43%, ctx=1392, majf=0, minf=9 00:23:23.591 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:23.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 issued rwts: total=1866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.591 filename2: (groupid=0, jobs=1): err= 0: pid=84062: Wed Nov 20 08:56:52 2024 00:23:23.591 read: IOPS=185, BW=742KiB/s (760kB/s)(7456KiB/10047msec) 00:23:23.591 slat (usec): min=7, max=8028, avg=30.36, stdev=255.66 00:23:23.591 clat (msec): min=13, max=177, avg=85.96, stdev=29.56 00:23:23.591 lat (msec): min=13, max=177, avg=85.99, stdev=29.57 00:23:23.591 clat percentiles (msec): 00:23:23.591 | 1.00th=[ 17], 5.00th=[ 31], 10.00th=[ 49], 20.00th=[ 64], 00:23:23.591 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 99], 00:23:23.591 | 70.00th=[ 107], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 129], 00:23:23.591 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 178], 00:23:23.591 | 99.99th=[ 178] 00:23:23.591 bw ( KiB/s): min= 544, max= 1648, per=4.32%, avg=739.25, stdev=246.85, samples=20 00:23:23.591 iops : min= 136, max= 412, avg=184.80, stdev=61.72, samples=20 00:23:23.591 lat (msec) : 20=3.33%, 50=8.15%, 100=50.16%, 250=38.36% 00:23:23.591 cpu : usr=41.61%, sys=1.73%, ctx=1357, majf=0, minf=9 00:23:23.591 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:23.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.591 filename2: (groupid=0, jobs=1): err= 0: pid=84063: Wed Nov 20 08:56:52 2024 00:23:23.591 read: IOPS=183, BW=732KiB/s (750kB/s)(7348KiB/10036msec) 00:23:23.591 slat (usec): min=5, max=8033, avg=31.13, stdev=289.75 00:23:23.591 clat (msec): min=20, max=152, avg=87.22, stdev=26.72 00:23:23.591 lat (msec): min=20, max=153, avg=87.25, stdev=26.72 00:23:23.591 clat percentiles (msec): 00:23:23.591 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 51], 20.00th=[ 67], 00:23:23.591 | 30.00th=[ 72], 40.00th=[ 80], 
50.00th=[ 86], 60.00th=[ 97], 00:23:23.591 | 70.00th=[ 107], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 127], 00:23:23.591 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 153], 00:23:23.591 | 99.99th=[ 153] 00:23:23.591 bw ( KiB/s): min= 560, max= 1280, per=4.26%, avg=728.30, stdev=168.75, samples=20 00:23:23.591 iops : min= 140, max= 320, avg=182.05, stdev=42.19, samples=20 00:23:23.591 lat (msec) : 50=9.69%, 100=52.80%, 250=37.51% 00:23:23.591 cpu : usr=42.60%, sys=1.51%, ctx=1240, majf=0, minf=9 00:23:23.591 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:23.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.591 filename2: (groupid=0, jobs=1): err= 0: pid=84064: Wed Nov 20 08:56:52 2024 00:23:23.591 read: IOPS=174, BW=697KiB/s (713kB/s)(6972KiB/10010msec) 00:23:23.591 slat (usec): min=4, max=6072, avg=37.45, stdev=265.41 00:23:23.591 clat (msec): min=10, max=192, avg=91.69, stdev=28.08 00:23:23.591 lat (msec): min=10, max=192, avg=91.73, stdev=28.08 00:23:23.591 clat percentiles (msec): 00:23:23.591 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 55], 20.00th=[ 70], 00:23:23.591 | 30.00th=[ 73], 40.00th=[ 83], 50.00th=[ 96], 60.00th=[ 105], 00:23:23.591 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 123], 95.00th=[ 132], 00:23:23.591 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 192], 00:23:23.591 | 99.99th=[ 192] 00:23:23.591 bw ( KiB/s): min= 400, max= 1136, per=3.91%, avg=669.05, stdev=145.43, samples=19 00:23:23.591 iops : min= 100, max= 284, avg=167.26, stdev=36.36, samples=19 00:23:23.591 lat (msec) : 20=0.75%, 50=6.77%, 100=45.44%, 250=47.05% 00:23:23.591 cpu : usr=41.56%, sys=1.75%, ctx=1380, majf=0, minf=9 00:23:23.591 IO depths : 1=0.1%, 2=2.6%, 4=10.3%, 8=72.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:23:23.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 complete : 0=0.0%, 4=89.7%, 8=8.0%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.591 filename2: (groupid=0, jobs=1): err= 0: pid=84065: Wed Nov 20 08:56:52 2024 00:23:23.591 read: IOPS=165, BW=661KiB/s (677kB/s)(6636KiB/10040msec) 00:23:23.591 slat (nsec): min=4873, max=58388, avg=20523.04, stdev=10109.56 00:23:23.591 clat (msec): min=15, max=179, avg=96.60, stdev=29.30 00:23:23.591 lat (msec): min=15, max=179, avg=96.62, stdev=29.29 00:23:23.591 clat percentiles (msec): 00:23:23.591 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 55], 20.00th=[ 71], 00:23:23.591 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 109], 00:23:23.591 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 130], 95.00th=[ 132], 00:23:23.591 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 180], 00:23:23.591 | 99.99th=[ 180] 00:23:23.591 bw ( KiB/s): min= 512, max= 1520, per=3.84%, avg=657.20, stdev=225.08, samples=20 00:23:23.591 iops : min= 128, max= 380, avg=164.30, stdev=56.27, samples=20 00:23:23.591 lat (msec) : 20=0.96%, 50=6.33%, 100=34.96%, 250=57.75% 00:23:23.591 cpu : usr=35.47%, sys=1.50%, ctx=1029, majf=0, minf=9 00:23:23.591 IO depths : 1=0.1%, 2=2.6%, 4=10.5%, 8=71.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:23.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 complete : 0=0.0%, 4=90.8%, 8=6.9%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.591 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:23.591 00:23:23.591 Run status group 0 (all jobs): 00:23:23.591 READ: bw=16.7MiB/s (17.5MB/s), 628KiB/s-763KiB/s (643kB/s-782kB/s), io=168MiB (176MB), run=10002-10053msec 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:23.591 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 bdev_null0 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 [2024-11-20 08:56:53.114986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:23.592 
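For reference, the create_subsystem trace above boils down to four RPCs per subsystem. A minimal stand-alone sketch using the values echoed in the log for subsystem 0; the scripts/rpc.py invocation is an assumption here (the test's rpc_cmd helper wraps the same RPC client):

  # target side: 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # expose it as a namespace of an NVMe-oF subsystem listening on TCP 10.0.0.3:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The same sequence repeats below for cnode1 / bdev_null1.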
08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 bdev_null1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:23.592 { 00:23:23.592 "params": { 00:23:23.592 "name": "Nvme$subsystem", 00:23:23.592 "trtype": "$TEST_TRANSPORT", 00:23:23.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.592 "adrfam": "ipv4", 00:23:23.592 "trsvcid": "$NVMF_PORT", 00:23:23.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.592 "hdgst": ${hdgst:-false}, 00:23:23.592 "ddgst": ${ddgst:-false} 00:23:23.592 }, 00:23:23.592 "method": "bdev_nvme_attach_controller" 00:23:23.592 } 00:23:23.592 EOF 00:23:23.592 )") 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:23.592 08:56:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:23.592 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:23.592 { 00:23:23.592 "params": { 00:23:23.592 "name": "Nvme$subsystem", 00:23:23.592 "trtype": "$TEST_TRANSPORT", 00:23:23.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.592 "adrfam": "ipv4", 00:23:23.592 "trsvcid": "$NVMF_PORT", 00:23:23.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.592 "hdgst": ${hdgst:-false}, 00:23:23.592 "ddgst": ${ddgst:-false} 00:23:23.592 }, 00:23:23.593 "method": "bdev_nvme_attach_controller" 00:23:23.593 } 00:23:23.593 EOF 00:23:23.593 )") 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:23.593 "params": { 00:23:23.593 "name": "Nvme0", 00:23:23.593 "trtype": "tcp", 00:23:23.593 "traddr": "10.0.0.3", 00:23:23.593 "adrfam": "ipv4", 00:23:23.593 "trsvcid": "4420", 00:23:23.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.593 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:23.593 "hdgst": false, 00:23:23.593 "ddgst": false 00:23:23.593 }, 00:23:23.593 "method": "bdev_nvme_attach_controller" 00:23:23.593 },{ 00:23:23.593 "params": { 00:23:23.593 "name": "Nvme1", 00:23:23.593 "trtype": "tcp", 00:23:23.593 "traddr": "10.0.0.3", 00:23:23.593 "adrfam": "ipv4", 00:23:23.593 "trsvcid": "4420", 00:23:23.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.593 "hdgst": false, 00:23:23.593 "ddgst": false 00:23:23.593 }, 00:23:23.593 "method": "bdev_nvme_attach_controller" 00:23:23.593 }' 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:23.593 08:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.593 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:23.593 ... 00:23:23.593 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:23.593 ... 
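The job file handed to fio on /dev/fd/61 is not echoed verbatim in the log, but from the parameters set at target/dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the per-file headers fio prints below, it is roughly equivalent to the following sketch. The filename values Nvme0n1/Nvme1n1 and the thread=1 option are assumptions based on how bdev_nvme_attach_controller names its bdevs and how the spdk_bdev ioengine is normally driven, not something captured in this log:

  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

Together with the JSON configuration printed above, it would be launched roughly as:

  LD_PRELOAD=build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio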
00:23:23.593 fio-3.35 00:23:23.593 Starting 4 threads 00:23:28.862 00:23:28.862 filename0: (groupid=0, jobs=1): err= 0: pid=84206: Wed Nov 20 08:56:59 2024 00:23:28.862 read: IOPS=2013, BW=15.7MiB/s (16.5MB/s)(78.7MiB/5001msec) 00:23:28.862 slat (usec): min=5, max=265, avg=14.54, stdev= 6.32 00:23:28.862 clat (usec): min=1156, max=10426, avg=3933.13, stdev=888.44 00:23:28.862 lat (usec): min=1164, max=10443, avg=3947.68, stdev=886.76 00:23:28.862 clat percentiles (usec): 00:23:28.862 | 1.00th=[ 2933], 5.00th=[ 3032], 10.00th=[ 3097], 20.00th=[ 3261], 00:23:28.862 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3720], 00:23:28.862 | 70.00th=[ 4178], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5538], 00:23:28.862 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7635], 99.95th=[ 7898], 00:23:28.862 | 99.99th=[ 8455] 00:23:28.862 bw ( KiB/s): min=13456, max=17296, per=24.69%, avg=15923.56, stdev=1395.09, samples=9 00:23:28.862 iops : min= 1682, max= 2162, avg=1990.44, stdev=174.39, samples=9 00:23:28.862 lat (msec) : 2=0.30%, 4=66.16%, 10=33.53%, 20=0.01% 00:23:28.862 cpu : usr=91.06%, sys=7.72%, ctx=66, majf=0, minf=1 00:23:28.862 IO depths : 1=0.1%, 2=0.2%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.862 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.862 issued rwts: total=10072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.862 filename0: (groupid=0, jobs=1): err= 0: pid=84207: Wed Nov 20 08:56:59 2024 00:23:28.862 read: IOPS=2016, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5001msec) 00:23:28.862 slat (nsec): min=6621, max=58008, avg=11877.14, stdev=4983.40 00:23:28.862 clat (usec): min=1203, max=7935, avg=3935.69, stdev=869.28 00:23:28.862 lat (usec): min=1211, max=7950, avg=3947.56, stdev=869.42 00:23:28.862 clat percentiles (usec): 00:23:28.862 | 1.00th=[ 2933], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3261], 00:23:28.862 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3720], 00:23:28.862 | 70.00th=[ 4178], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5538], 00:23:28.862 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7373], 99.95th=[ 7439], 00:23:28.862 | 99.99th=[ 7832] 00:23:28.862 bw ( KiB/s): min=13808, max=17296, per=24.75%, avg=15966.22, stdev=1331.55, samples=9 00:23:28.862 iops : min= 1726, max= 2162, avg=1995.78, stdev=166.44, samples=9 00:23:28.862 lat (msec) : 2=0.15%, 4=66.22%, 10=33.63% 00:23:28.862 cpu : usr=91.70%, sys=7.44%, ctx=11, majf=0, minf=0 00:23:28.862 IO depths : 1=0.1%, 2=0.2%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.862 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.862 issued rwts: total=10084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.862 filename1: (groupid=0, jobs=1): err= 0: pid=84208: Wed Nov 20 08:56:59 2024 00:23:28.862 read: IOPS=2017, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5003msec) 00:23:28.862 slat (nsec): min=6519, max=59437, avg=10052.95, stdev=3896.91 00:23:28.862 clat (usec): min=1466, max=8591, avg=3935.65, stdev=885.20 00:23:28.862 lat (usec): min=1474, max=8598, avg=3945.70, stdev=884.49 00:23:28.862 clat percentiles (usec): 00:23:28.862 | 1.00th=[ 2933], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3261], 00:23:28.862 | 30.00th=[ 
3392], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3720], 00:23:28.862 | 70.00th=[ 4178], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5538], 00:23:28.862 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7570], 00:23:28.862 | 99.99th=[ 8455] 00:23:28.862 bw ( KiB/s): min=14000, max=17312, per=24.79%, avg=15987.56, stdev=1277.32, samples=9 00:23:28.862 iops : min= 1750, max= 2164, avg=1998.44, stdev=159.66, samples=9 00:23:28.862 lat (msec) : 2=0.24%, 4=66.32%, 10=33.44% 00:23:28.862 cpu : usr=91.18%, sys=7.96%, ctx=6, majf=0, minf=0 00:23:28.862 IO depths : 1=0.1%, 2=0.2%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.862 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.862 issued rwts: total=10096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.862 filename1: (groupid=0, jobs=1): err= 0: pid=84209: Wed Nov 20 08:56:59 2024 00:23:28.862 read: IOPS=2016, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5001msec) 00:23:28.862 slat (nsec): min=3731, max=81919, avg=14791.01, stdev=4886.08 00:23:28.862 clat (usec): min=938, max=10394, avg=3930.32, stdev=888.76 00:23:28.862 lat (usec): min=948, max=10417, avg=3945.11, stdev=888.45 00:23:28.862 clat percentiles (usec): 00:23:28.862 | 1.00th=[ 2933], 5.00th=[ 3032], 10.00th=[ 3097], 20.00th=[ 3261], 00:23:28.862 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3720], 00:23:28.862 | 70.00th=[ 4178], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5538], 00:23:28.862 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 7832], 00:23:28.862 | 99.99th=[ 8586] 00:23:28.862 bw ( KiB/s): min=13482, max=17296, per=24.70%, avg=15931.78, stdev=1392.46, samples=9 00:23:28.862 iops : min= 1685, max= 2162, avg=1991.44, stdev=174.11, samples=9 00:23:28.862 lat (usec) : 1000=0.03% 00:23:28.862 lat (msec) : 2=0.35%, 4=66.12%, 10=33.49%, 20=0.01% 00:23:28.862 cpu : usr=91.26%, sys=7.88%, ctx=9, majf=0, minf=0 00:23:28.862 IO depths : 1=0.1%, 2=0.3%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.862 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.862 issued rwts: total=10083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.862 00:23:28.862 Run status group 0 (all jobs): 00:23:28.862 READ: bw=63.0MiB/s (66.0MB/s), 15.7MiB/s-15.8MiB/s (16.5MB/s-16.5MB/s), io=315MiB (330MB), run=5001-5003msec 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.862 ************************************ 00:23:28.862 END TEST fio_dif_rand_params 00:23:28.862 ************************************ 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.862 00:23:28.862 real 0m24.085s 00:23:28.862 user 2m6.232s 00:23:28.862 sys 0m7.038s 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.862 08:56:59 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:28.862 08:56:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:28.862 08:56:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:28.862 ************************************ 00:23:28.862 START TEST fio_dif_digest 00:23:28.862 ************************************ 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:28.862 08:56:59 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.862 bdev_null0 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.862 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:28.863 [2024-11-20 08:56:59.489484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.863 { 00:23:28.863 "params": { 00:23:28.863 "name": "Nvme$subsystem", 00:23:28.863 "trtype": "$TEST_TRANSPORT", 00:23:28.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.863 "adrfam": "ipv4", 00:23:28.863 "trsvcid": "$NVMF_PORT", 00:23:28.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.863 "hdgst": ${hdgst:-false}, 00:23:28.863 "ddgst": ${ddgst:-false} 00:23:28.863 }, 00:23:28.863 "method": "bdev_nvme_attach_controller" 00:23:28.863 } 00:23:28.863 EOF 00:23:28.863 )") 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:28.863 "params": { 00:23:28.863 "name": "Nvme0", 00:23:28.863 "trtype": "tcp", 00:23:28.863 "traddr": "10.0.0.3", 00:23:28.863 "adrfam": "ipv4", 00:23:28.863 "trsvcid": "4420", 00:23:28.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:28.863 "hdgst": true, 00:23:28.863 "ddgst": true 00:23:28.863 }, 00:23:28.863 "method": "bdev_nvme_attach_controller" 00:23:28.863 }' 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 
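Relative to the random-params run above, the digest variant changes only two knobs, both visible in the trace: the target-side null bdev is created with protection information type 3, and gen_nvmf_target_json is called with hdgst/ddgst forced on, so the attach-controller parameters printed just above carry "hdgst": true and "ddgst": true and NVMe/TCP header and data digests are enabled on the initiator connection. Expressed as plain commands (scripts/rpc.py is an assumption; the script's rpc_cmd wrapper issues the same RPC):

  # DIF type 3 on the null bdev instead of type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # initiator side: the generated bdev_nvme_attach_controller entry enables digests via
  #   "hdgst": true,
  #   "ddgst": true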
00:23:28.863 08:56:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.863 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:28.863 ... 00:23:28.863 fio-3.35 00:23:28.863 Starting 3 threads 00:23:41.087 00:23:41.087 filename0: (groupid=0, jobs=1): err= 0: pid=84315: Wed Nov 20 08:57:10 2024 00:23:41.087 read: IOPS=227, BW=28.5MiB/s (29.8MB/s)(285MiB/10001msec) 00:23:41.087 slat (nsec): min=6961, max=40708, avg=10517.81, stdev=4154.16 00:23:41.087 clat (usec): min=11918, max=15754, avg=13150.86, stdev=561.63 00:23:41.087 lat (usec): min=11927, max=15769, avg=13161.38, stdev=561.74 00:23:41.087 clat percentiles (usec): 00:23:41.087 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12387], 20.00th=[12649], 00:23:41.087 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:23:41.087 | 70.00th=[13566], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960], 00:23:41.087 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15795], 99.95th=[15795], 00:23:41.087 | 99.99th=[15795] 00:23:41.087 bw ( KiB/s): min=28416, max=30720, per=33.38%, avg=29184.00, stdev=677.31, samples=19 00:23:41.087 iops : min= 222, max= 240, avg=228.00, stdev= 5.29, samples=19 00:23:41.087 lat (msec) : 20=100.00% 00:23:41.087 cpu : usr=90.98%, sys=8.38%, ctx=10, majf=0, minf=0 00:23:41.087 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:41.087 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:41.087 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:41.087 filename0: (groupid=0, jobs=1): err= 0: pid=84316: Wed Nov 20 08:57:10 2024 00:23:41.087 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(285MiB/10009msec) 00:23:41.087 slat (nsec): min=7265, max=54196, avg=14984.45, stdev=4887.52 00:23:41.087 clat (usec): min=9189, max=15616, avg=13136.47, stdev=586.37 00:23:41.087 lat (usec): min=9203, max=15633, avg=13151.45, stdev=586.43 00:23:41.087 clat percentiles (usec): 00:23:41.087 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12387], 20.00th=[12649], 00:23:41.087 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:23:41.087 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960], 00:23:41.087 | 99.00th=[14746], 99.50th=[15139], 99.90th=[15533], 99.95th=[15664], 00:23:41.087 | 99.99th=[15664] 00:23:41.087 bw ( KiB/s): min=28416, max=30720, per=33.38%, avg=29184.00, stdev=677.31, samples=19 00:23:41.087 iops : min= 222, max= 240, avg=228.00, stdev= 5.29, samples=19 00:23:41.087 lat (msec) : 10=0.26%, 20=99.74% 00:23:41.087 cpu : usr=91.46%, sys=7.97%, ctx=9, majf=0, minf=0 00:23:41.087 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:41.087 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:41.087 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:41.087 filename0: (groupid=0, jobs=1): err= 0: pid=84317: Wed Nov 20 08:57:10 2024 00:23:41.087 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(285MiB/10009msec) 00:23:41.087 slat (nsec): min=7217, max=75993, avg=15959.56, stdev=5199.76 
00:23:41.087 clat (usec): min=9121, max=15620, avg=13132.72, stdev=586.84 00:23:41.087 lat (usec): min=9135, max=15637, avg=13148.67, stdev=586.80 00:23:41.087 clat percentiles (usec): 00:23:41.087 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12387], 20.00th=[12649], 00:23:41.087 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:23:41.087 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960], 00:23:41.087 | 99.00th=[14746], 99.50th=[15139], 99.90th=[15533], 99.95th=[15664], 00:23:41.087 | 99.99th=[15664] 00:23:41.087 bw ( KiB/s): min=28416, max=30720, per=33.38%, avg=29184.00, stdev=677.31, samples=19 00:23:41.087 iops : min= 222, max= 240, avg=228.00, stdev= 5.29, samples=19 00:23:41.087 lat (msec) : 10=0.26%, 20=99.74% 00:23:41.087 cpu : usr=91.29%, sys=8.07%, ctx=17, majf=0, minf=0 00:23:41.087 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:41.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:41.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:41.087 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:41.087 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:41.087 00:23:41.087 Run status group 0 (all jobs): 00:23:41.087 READ: bw=85.4MiB/s (89.5MB/s), 28.5MiB/s-28.5MiB/s (29.8MB/s-29.9MB/s), io=855MiB (896MB), run=10001-10009msec 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.087 00:23:41.087 real 0m11.224s 00:23:41.087 user 0m28.194s 00:23:41.087 sys 0m2.782s 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.087 ************************************ 00:23:41.087 END TEST fio_dif_digest 00:23:41.087 ************************************ 00:23:41.087 08:57:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:41.087 08:57:10 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:41.087 08:57:10 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:23:41.087 08:57:10 nvmf_dif -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.087 rmmod nvme_tcp 00:23:41.087 rmmod nvme_fabrics 00:23:41.087 rmmod nvme_keyring 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83557 ']' 00:23:41.087 08:57:10 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83557 00:23:41.087 08:57:10 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83557 ']' 00:23:41.087 08:57:10 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83557 00:23:41.087 08:57:10 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:23:41.087 08:57:10 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.087 08:57:10 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83557 00:23:41.087 08:57:10 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:41.088 08:57:10 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:41.088 killing process with pid 83557 00:23:41.088 08:57:10 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83557' 00:23:41.088 08:57:10 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83557 00:23:41.088 08:57:10 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83557 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:41.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:41.088 Waiting for block devices as requested 00:23:41.088 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:41.088 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:41.088 08:57:11 
nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.088 08:57:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:41.088 08:57:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.088 08:57:11 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:23:41.088 00:23:41.088 real 1m0.702s 00:23:41.088 user 3m49.861s 00:23:41.088 sys 0m19.237s 00:23:41.088 08:57:11 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.088 08:57:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:41.088 ************************************ 00:23:41.088 END TEST nvmf_dif 00:23:41.088 ************************************ 00:23:41.347 08:57:12 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:41.347 08:57:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:41.347 08:57:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:41.347 08:57:12 -- common/autotest_common.sh@10 -- # set +x 00:23:41.347 ************************************ 00:23:41.347 START TEST nvmf_abort_qd_sizes 00:23:41.347 ************************************ 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:41.347 * Looking for test storage... 
00:23:41.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:41.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.347 --rc genhtml_branch_coverage=1 00:23:41.347 --rc genhtml_function_coverage=1 00:23:41.347 --rc genhtml_legend=1 00:23:41.347 --rc geninfo_all_blocks=1 00:23:41.347 --rc geninfo_unexecuted_blocks=1 00:23:41.347 00:23:41.347 ' 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:41.347 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.347 --rc genhtml_branch_coverage=1 00:23:41.347 --rc genhtml_function_coverage=1 00:23:41.347 --rc genhtml_legend=1 00:23:41.347 --rc geninfo_all_blocks=1 00:23:41.347 --rc geninfo_unexecuted_blocks=1 00:23:41.347 00:23:41.347 ' 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:41.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.347 --rc genhtml_branch_coverage=1 00:23:41.347 --rc genhtml_function_coverage=1 00:23:41.347 --rc genhtml_legend=1 00:23:41.347 --rc geninfo_all_blocks=1 00:23:41.347 --rc geninfo_unexecuted_blocks=1 00:23:41.347 00:23:41.347 ' 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:41.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.347 --rc genhtml_branch_coverage=1 00:23:41.347 --rc genhtml_function_coverage=1 00:23:41.347 --rc genhtml_legend=1 00:23:41.347 --rc geninfo_all_blocks=1 00:23:41.347 --rc geninfo_unexecuted_blocks=1 00:23:41.347 00:23:41.347 ' 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:41.347 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:41.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:41.348 Cannot find device "nvmf_init_br" 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:41.348 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:41.607 Cannot find device "nvmf_init_br2" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:41.607 Cannot find device "nvmf_tgt_br" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:41.607 Cannot find device "nvmf_tgt_br2" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:41.607 Cannot find device "nvmf_init_br" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:23:41.607 Cannot find device "nvmf_init_br2" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:41.607 Cannot find device "nvmf_tgt_br" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:41.607 Cannot find device "nvmf_tgt_br2" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:41.607 Cannot find device "nvmf_br" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:41.607 Cannot find device "nvmf_init_if" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:41.607 Cannot find device "nvmf_init_if2" 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:41.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:41.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
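The xtrace above and just below is nvmf_veth_init from test/nvmf/common.sh building the virtual test network. A condensed, hand-runnable sketch of the same sequence follows, using the interface names and 10.0.0.x addresses printed in the trace; this is a summary for orientation, not the common.sh code verbatim (the bridge, iptables and ping steps it ends with continue in the trace below):

  # the target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator addresses (10.0.0.1/.2) stay in the root namespace, target addresses (10.0.0.3/.4) in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the peer ends together so initiator and target can reach each other
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$p" master nvmf_br; done
  # open NVMe/TCP port 4420 on the initiator-facing interfaces and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity check both directions, as the trace does with its four pings
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1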
00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:41.607 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:41.865 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:41.865 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:41.865 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:41.865 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:41.865 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:41.866 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:41.866 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:41.866 00:23:41.866 --- 10.0.0.3 ping statistics --- 00:23:41.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.866 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:41.866 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:41.866 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:23:41.866 00:23:41.866 --- 10.0.0.4 ping statistics --- 00:23:41.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.866 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:41.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:41.866 00:23:41.866 --- 10.0.0.1 ping statistics --- 00:23:41.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.866 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:41.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:23:41.866 00:23:41.866 --- 10.0.0.2 ping statistics --- 00:23:41.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.866 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:41.866 08:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:42.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:42.693 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:42.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84969 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84969 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84969 ']' 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.694 08:57:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:42.954 [2024-11-20 08:57:13.613460] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:23:42.954 [2024-11-20 08:57:13.613562] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.954 [2024-11-20 08:57:13.765627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.954 [2024-11-20 08:57:13.839456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.954 [2024-11-20 08:57:13.839529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.954 [2024-11-20 08:57:13.839556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.954 [2024-11-20 08:57:13.839567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.954 [2024-11-20 08:57:13.839577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.954 [2024-11-20 08:57:13.841150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.954 [2024-11-20 08:57:13.841262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.954 [2024-11-20 08:57:13.841396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.954 [2024-11-20 08:57:13.841403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.213 [2024-11-20 08:57:13.920545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:23:43.213 08:57:14 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:23:43.213 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
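The nvme_in_userspace trace above enumerates NVMe controllers by PCI class code (class 01, subclass 08, prog-if 02) and keeps only the ones no longer bound to the kernel nvme driver, i.e. the ones setup.sh already handed to a userspace driver. A minimal sketch of that idea is below; the function name is made up here, the lspci/awk pipeline and sysfs check mirror the trace, and the real helper additionally consults the PCI allow/block lists (pci_can_use) and has a FreeBSD branch:

  # print BDFs of NVMe controllers (class/subclass 0108, prog-if 02) available to userspace drivers
  list_userspace_nvme_bdfs() {
      lspci -mm -n -D |
          grep -i -- -p02 |                                       # keep prog-if 02 entries
          awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |   # class/subclass 0108 -> print the BDF
          tr -d '"' |
          while read -r bdf; do
              # still bound to the kernel nvme driver -> not usable by SPDK, skip it
              [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
              echo "$bdf"
          done
  }
  # on this VM setup.sh had rebound both controllers to uio_pci_generic,
  # so this prints 0000:00:10.0 and 0000:00:11.0, matching the trace.
  list_userspace_nvme_bdfs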
00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.214 08:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:43.214 ************************************ 00:23:43.214 START TEST spdk_target_abort 00:23:43.214 ************************************ 00:23:43.214 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:23:43.214 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:43.214 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:43.214 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.214 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:43.473 spdk_targetn1 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:43.473 [2024-11-20 08:57:14.168216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:43.473 [2024-11-20 08:57:14.213403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:43.473 08:57:14 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:43.473 08:57:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.760 Initializing NVMe Controllers 00:23:46.760 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.760 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:46.760 Initialization complete. Launching workers. 
00:23:46.760 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8937, failed: 0 00:23:46.760 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1030, failed to submit 7907 00:23:46.760 success 820, unsuccessful 210, failed 0 00:23:46.760 08:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:46.760 08:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:50.047 Initializing NVMe Controllers 00:23:50.047 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:50.047 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:50.047 Initialization complete. Launching workers. 00:23:50.047 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9001, failed: 0 00:23:50.047 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 7830 00:23:50.047 success 351, unsuccessful 820, failed 0 00:23:50.047 08:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:50.047 08:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:53.336 Initializing NVMe Controllers 00:23:53.336 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:53.336 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:53.336 Initialization complete. Launching workers. 
00:23:53.337 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29247, failed: 0 00:23:53.337 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2314, failed to submit 26933 00:23:53.337 success 390, unsuccessful 1924, failed 0 00:23:53.337 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:53.337 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.337 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:53.337 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.337 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:53.337 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.337 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84969 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84969 ']' 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84969 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84969 00:23:53.906 killing process with pid 84969 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84969' 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84969 00:23:53.906 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84969 00:23:54.166 ************************************ 00:23:54.166 END TEST spdk_target_abort 00:23:54.166 ************************************ 00:23:54.166 00:23:54.166 real 0m10.795s 00:23:54.166 user 0m41.661s 00:23:54.166 sys 0m1.941s 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 08:57:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:54.166 08:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:54.166 08:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.166 08:57:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:54.166 ************************************ 00:23:54.166 START TEST kernel_target_abort 00:23:54.166 
************************************ 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:54.166 08:57:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:54.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:54.426 Waiting for block devices as requested 00:23:54.685 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:54.685 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:54.685 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:54.944 No valid GPT data, bailing 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:54.944 No valid GPT data, bailing 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
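The block_in_use checks in this part of the trace decide whether each nvme namespace is safe to claim as the kernel target's backing device by probing for a partition table (spdk-gpt.py plus blkid, "No valid GPT data, bailing" meaning the device is free). A rough equivalent of the blkid part of that probe, under the assumption that a missing PTTYPE means the namespace is unused (function name made up here; the real helpers also consult spdk-gpt.py and skip zoned devices):

  # return 0 (in use) if the device carries a partition table, 1 if it is free
  block_has_pt() {
      local pt
      pt=$(blkid -s PTTYPE -o value "/dev/$1")
      [[ -n $pt ]]    # e.g. gpt or dos -> in use; empty output -> free
  }
  for blk in /sys/block/nvme*; do
      dev=${blk##*/}
      block_has_pt "$dev" || nvme=/dev/$dev   # last free namespace wins; the trace ends up with /dev/nvme1n1
  done
  echo "kernel target backing device: $nvme"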
00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:54.944 No valid GPT data, bailing 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:54.944 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:55.204 No valid GPT data, bailing 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb --hostid=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb -a 10.0.0.1 -t tcp -s 4420 00:23:55.204 00:23:55.204 Discovery Log Number of Records 2, Generation counter 2 00:23:55.204 =====Discovery Log Entry 0====== 00:23:55.204 trtype: tcp 00:23:55.204 adrfam: ipv4 00:23:55.204 subtype: current discovery subsystem 00:23:55.204 treq: not specified, sq flow control disable supported 00:23:55.204 portid: 1 00:23:55.204 trsvcid: 4420 00:23:55.204 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:55.204 traddr: 10.0.0.1 00:23:55.204 eflags: none 00:23:55.204 sectype: none 00:23:55.204 =====Discovery Log Entry 1====== 00:23:55.204 trtype: tcp 00:23:55.204 adrfam: ipv4 00:23:55.204 subtype: nvme subsystem 00:23:55.204 treq: not specified, sq flow control disable supported 00:23:55.204 portid: 1 00:23:55.204 trsvcid: 4420 00:23:55.204 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:55.204 traddr: 10.0.0.1 00:23:55.204 eflags: none 00:23:55.204 sectype: none 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:55.204 08:57:25 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:55.204 08:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:58.493 Initializing NVMe Controllers 00:23:58.493 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:58.493 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:58.493 Initialization complete. Launching workers. 00:23:58.493 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32436, failed: 0 00:23:58.493 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32436, failed to submit 0 00:23:58.493 success 0, unsuccessful 32436, failed 0 00:23:58.493 08:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:58.493 08:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:01.785 Initializing NVMe Controllers 00:24:01.785 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:01.785 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:01.785 Initialization complete. Launching workers. 
00:24:01.785 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67238, failed: 0 00:24:01.785 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29210, failed to submit 38028 00:24:01.785 success 0, unsuccessful 29210, failed 0 00:24:01.785 08:57:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:01.785 08:57:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:05.074 Initializing NVMe Controllers 00:24:05.074 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:05.074 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:05.074 Initialization complete. Launching workers. 00:24:05.074 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78252, failed: 0 00:24:05.074 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19514, failed to submit 58738 00:24:05.074 success 0, unsuccessful 19514, failed 0 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:05.074 08:57:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:05.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:07.548 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:07.548 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:07.548 ************************************ 00:24:07.548 END TEST kernel_target_abort 00:24:07.548 ************************************ 00:24:07.548 00:24:07.548 real 0m13.199s 00:24:07.548 user 0m6.358s 00:24:07.548 sys 0m4.292s 00:24:07.548 08:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.548 08:57:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:07.548 
08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:07.548 rmmod nvme_tcp 00:24:07.548 rmmod nvme_fabrics 00:24:07.548 rmmod nvme_keyring 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:07.548 Process with pid 84969 is not found 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84969 ']' 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84969 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84969 ']' 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84969 00:24:07.548 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84969) - No such process 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84969 is not found' 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:07.548 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:07.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:07.807 Waiting for block devices as requested 00:24:07.807 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:08.075 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:08.075 08:57:38 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:08.075 08:57:38 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:08.342 00:24:08.342 real 0m27.075s 00:24:08.342 user 0m49.143s 00:24:08.342 sys 0m7.620s 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.342 08:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:08.342 ************************************ 00:24:08.342 END TEST nvmf_abort_qd_sizes 00:24:08.342 ************************************ 00:24:08.342 08:57:39 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:08.342 08:57:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:08.342 08:57:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.342 08:57:39 -- common/autotest_common.sh@10 -- # set +x 00:24:08.342 ************************************ 00:24:08.342 START TEST keyring_file 00:24:08.342 ************************************ 00:24:08.342 08:57:39 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:08.342 * Looking for test storage... 
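The keyring_file suite that begins here exercises SPDK's file-based keyring over the bperf control socket: it writes TLS PSKs into temporary files, registers them under names like key0/key1, and attaches an NVMe/TCP controller that refers to a key by name instead of by path. As a minimal sketch of that flow (assuming bdevperf is already listening on /var/tmp/bperf.sock, as it is later in this log), the steps reduce to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Register an existing 0600-mode PSK file under the name "key0"; the test later
# shows that looser permissions (0660) are rejected by keyring_file_add_key.
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx
# Attach an NVMe/TCP controller that resolves its TLS PSK through the keyring.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0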
00:24:08.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:08.342 08:57:39 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:08.342 08:57:39 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:24:08.342 08:57:39 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.601 08:57:39 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.601 08:57:39 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:08.601 08:57:39 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.601 08:57:39 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.601 --rc genhtml_branch_coverage=1 00:24:08.601 --rc genhtml_function_coverage=1 00:24:08.601 --rc genhtml_legend=1 00:24:08.601 --rc geninfo_all_blocks=1 00:24:08.601 --rc geninfo_unexecuted_blocks=1 00:24:08.601 00:24:08.601 ' 00:24:08.601 08:57:39 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.601 --rc genhtml_branch_coverage=1 00:24:08.601 --rc genhtml_function_coverage=1 00:24:08.601 --rc genhtml_legend=1 00:24:08.601 --rc geninfo_all_blocks=1 00:24:08.601 --rc 
geninfo_unexecuted_blocks=1 00:24:08.601 00:24:08.601 ' 00:24:08.601 08:57:39 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.601 --rc genhtml_branch_coverage=1 00:24:08.601 --rc genhtml_function_coverage=1 00:24:08.601 --rc genhtml_legend=1 00:24:08.601 --rc geninfo_all_blocks=1 00:24:08.601 --rc geninfo_unexecuted_blocks=1 00:24:08.601 00:24:08.601 ' 00:24:08.601 08:57:39 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.601 --rc genhtml_branch_coverage=1 00:24:08.601 --rc genhtml_function_coverage=1 00:24:08.601 --rc genhtml_legend=1 00:24:08.601 --rc geninfo_all_blocks=1 00:24:08.601 --rc geninfo_unexecuted_blocks=1 00:24:08.601 00:24:08.601 ' 00:24:08.601 08:57:39 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.602 08:57:39 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.602 08:57:39 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.602 08:57:39 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.602 08:57:39 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.602 08:57:39 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.602 08:57:39 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.602 08:57:39 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.602 08:57:39 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:08.602 08:57:39 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.602 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:08.602 08:57:39 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uOOBWUy3mx 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uOOBWUy3mx 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uOOBWUy3mx 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uOOBWUy3mx 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Qjc0torqfE 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:08.602 08:57:39 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Qjc0torqfE 00:24:08.602 08:57:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Qjc0torqfE 00:24:08.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
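The prep_key trace above builds each key file in three steps: mktemp for the path, format_interchange_psk to turn the raw hex key into the NVMe TLS PSK interchange form, and chmod 0600 so the keyring will accept it. A rough equivalent of one round is sketched below; the actual encoding is done by the embedded `python -` in nvmf/common.sh, so the base64-plus-CRC32 layout shown here is an assumption, not the verbatim helper.

# Sketch of one prep_key round, not the verbatim helper from keyring/common.sh.
# The NVMeTLSkey-1 encoding (base64 of the raw key bytes plus a CRC32) is an
# assumption about what the embedded `python -` above computes; the mktemp,
# chmod 0600 and echoed path are taken directly from the trace.
key=00112233445566778899aabbccddeeff
digest=0
path=$(mktemp)
python3 - "$key" "$digest" > "$path" <<'PYEOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(raw + crc).decode()}:")
PYEOF
chmod 0600 "$path"
echo "$path"   # e.g. /tmp/tmp.uOOBWUy3mx, later registered as key0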
00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Qjc0torqfE 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@30 -- # tgtpid=85872 00:24:08.602 08:57:39 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85872 00:24:08.602 08:57:39 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85872 ']' 00:24:08.602 08:57:39 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.602 08:57:39 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.602 08:57:39 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.602 08:57:39 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.602 08:57:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:08.861 [2024-11-20 08:57:39.522009] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:24:08.861 [2024-11-20 08:57:39.522291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85872 ] 00:24:08.861 [2024-11-20 08:57:39.663711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.861 [2024-11-20 08:57:39.738060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.120 [2024-11-20 08:57:39.839696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:09.380 08:57:40 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:09.380 [2024-11-20 08:57:40.099531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.380 null0 00:24:09.380 [2024-11-20 08:57:40.131483] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.380 [2024-11-20 08:57:40.131900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.380 08:57:40 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:09.380 [2024-11-20 08:57:40.159482] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:09.380 request: 00:24:09.380 { 00:24:09.380 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:09.380 "secure_channel": false, 00:24:09.380 "listen_address": { 00:24:09.380 "trtype": "tcp", 00:24:09.380 "traddr": "127.0.0.1", 00:24:09.380 "trsvcid": "4420" 00:24:09.380 }, 00:24:09.380 "method": "nvmf_subsystem_add_listener", 00:24:09.380 "req_id": 1 00:24:09.380 } 00:24:09.380 Got JSON-RPC error response 00:24:09.380 response: 00:24:09.380 { 00:24:09.380 "code": -32602, 00:24:09.380 "message": "Invalid parameters" 00:24:09.380 } 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.380 08:57:40 keyring_file -- keyring/file.sh@47 -- # bperfpid=85883 00:24:09.380 08:57:40 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:09.380 08:57:40 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85883 /var/tmp/bperf.sock 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85883 ']' 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:09.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.380 08:57:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:09.380 [2024-11-20 08:57:40.227882] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:24:09.380 [2024-11-20 08:57:40.228187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85883 ] 00:24:09.639 [2024-11-20 08:57:40.371541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.639 [2024-11-20 08:57:40.429439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.639 [2024-11-20 08:57:40.505560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:09.898 08:57:40 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.898 08:57:40 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:09.898 08:57:40 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx 00:24:09.898 08:57:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx 00:24:10.157 08:57:40 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Qjc0torqfE 00:24:10.157 08:57:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Qjc0torqfE 00:24:10.416 08:57:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:10.416 08:57:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:10.416 08:57:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.416 08:57:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.416 08:57:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:10.675 08:57:41 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uOOBWUy3mx == \/\t\m\p\/\t\m\p\.\u\O\O\B\W\U\y\3\m\x ]] 00:24:10.675 08:57:41 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:10.675 08:57:41 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:10.675 08:57:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.675 08:57:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.675 08:57:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:10.934 08:57:41 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Qjc0torqfE == \/\t\m\p\/\t\m\p\.\Q\j\c\0\t\o\r\q\f\E ]] 00:24:10.934 08:57:41 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:10.934 08:57:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:10.934 08:57:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.934 08:57:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.934 08:57:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.934 08:57:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:11.193 08:57:42 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:11.193 08:57:42 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:11.193 08:57:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:11.193 08:57:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:11.193 08:57:42 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:11.193 08:57:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.193 08:57:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:11.452 08:57:42 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:11.452 08:57:42 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:11.452 08:57:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:11.710 [2024-11-20 08:57:42.521481] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.710 nvme0n1 00:24:11.710 08:57:42 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:11.710 08:57:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:11.710 08:57:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:11.710 08:57:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:11.710 08:57:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:11.710 08:57:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.969 08:57:42 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:24:11.969 08:57:42 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:24:11.969 08:57:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:11.969 08:57:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:11.969 08:57:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:11.969 08:57:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.969 08:57:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:12.227 08:57:43 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:24:12.228 08:57:43 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:12.486 Running I/O for 1 seconds... 
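With nvme0n1 attached through key0, the suite verifies the keyring bookkeeping and then drives traffic by RPC against the already-running bdevperf instance rather than restarting it. The refcount checks above boil down to filtering keyring_get_keys with jq, and the I/O phase is a single perform_tests call; both commands appear verbatim in this trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# key0's refcnt is 1 while it is merely registered and rises to 2 once the
# attached controller holds a reference to it (the (( 2 == 2 )) check above).
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
# Run the configured workload (-q 128 -o 4k -w randrw -M 50 -t 1) for one second.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests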
00:24:13.423 11499.00 IOPS, 44.92 MiB/s 00:24:13.423 Latency(us) 00:24:13.423 [2024-11-20T08:57:44.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.423 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:13.423 nvme0n1 : 1.01 11551.91 45.12 0.00 0.00 11051.88 4200.26 16324.42 00:24:13.423 [2024-11-20T08:57:44.338Z] =================================================================================================================== 00:24:13.423 [2024-11-20T08:57:44.338Z] Total : 11551.91 45.12 0.00 0.00 11051.88 4200.26 16324.42 00:24:13.423 { 00:24:13.423 "results": [ 00:24:13.423 { 00:24:13.423 "job": "nvme0n1", 00:24:13.423 "core_mask": "0x2", 00:24:13.423 "workload": "randrw", 00:24:13.423 "percentage": 50, 00:24:13.423 "status": "finished", 00:24:13.423 "queue_depth": 128, 00:24:13.423 "io_size": 4096, 00:24:13.423 "runtime": 1.006673, 00:24:13.423 "iops": 11551.914077361766, 00:24:13.423 "mibps": 45.1246643646944, 00:24:13.423 "io_failed": 0, 00:24:13.423 "io_timeout": 0, 00:24:13.423 "avg_latency_us": 11051.87696589248, 00:24:13.423 "min_latency_us": 4200.261818181818, 00:24:13.423 "max_latency_us": 16324.421818181818 00:24:13.424 } 00:24:13.424 ], 00:24:13.424 "core_count": 1 00:24:13.424 } 00:24:13.424 08:57:44 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:13.424 08:57:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:13.683 08:57:44 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:13.683 08:57:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:13.683 08:57:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:13.683 08:57:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:13.683 08:57:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:13.683 08:57:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:14.252 08:57:44 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:14.252 08:57:44 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:14.252 08:57:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:14.252 08:57:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.252 08:57:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.252 08:57:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:14.252 08:57:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.512 08:57:45 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:14.512 08:57:45 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:14.512 08:57:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:14.512 08:57:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:14.512 08:57:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:14.512 08:57:45 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.512 08:57:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:14.512 08:57:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.512 08:57:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:14.512 08:57:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:14.512 [2024-11-20 08:57:45.416629] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:14.512 [2024-11-20 08:57:45.417120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131d770 (107): Transport endpoint is not connected 00:24:14.512 [2024-11-20 08:57:45.418108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131d770 (9): Bad file descriptor 00:24:14.512 [2024-11-20 08:57:45.419099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:14.512 [2024-11-20 08:57:45.419132] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:14.512 [2024-11-20 08:57:45.419143] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:14.512 [2024-11-20 08:57:45.419155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:14.512 request: 00:24:14.512 { 00:24:14.512 "name": "nvme0", 00:24:14.512 "trtype": "tcp", 00:24:14.512 "traddr": "127.0.0.1", 00:24:14.512 "adrfam": "ipv4", 00:24:14.512 "trsvcid": "4420", 00:24:14.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:14.512 "prchk_reftag": false, 00:24:14.512 "prchk_guard": false, 00:24:14.512 "hdgst": false, 00:24:14.512 "ddgst": false, 00:24:14.512 "psk": "key1", 00:24:14.512 "allow_unrecognized_csi": false, 00:24:14.512 "method": "bdev_nvme_attach_controller", 00:24:14.512 "req_id": 1 00:24:14.512 } 00:24:14.512 Got JSON-RPC error response 00:24:14.512 response: 00:24:14.512 { 00:24:14.512 "code": -5, 00:24:14.512 "message": "Input/output error" 00:24:14.512 } 00:24:14.772 08:57:45 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:14.772 08:57:45 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:14.772 08:57:45 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:14.772 08:57:45 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:14.772 08:57:45 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:14.772 08:57:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:14.772 08:57:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.772 08:57:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.772 08:57:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.772 08:57:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:15.031 08:57:45 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:15.031 08:57:45 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:15.031 08:57:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:15.031 08:57:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:15.031 08:57:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.031 08:57:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.031 08:57:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:15.290 08:57:45 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:15.290 08:57:45 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:15.290 08:57:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:15.549 08:57:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:15.549 08:57:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:15.808 08:57:46 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:15.808 08:57:46 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:15.808 08:57:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.808 08:57:46 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:15.808 08:57:46 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.uOOBWUy3mx 00:24:15.808 08:57:46 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx 00:24:15.808 08:57:46 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:24:15.808 08:57:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx 00:24:15.808 08:57:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:15.808 08:57:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.808 08:57:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:15.808 08:57:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.808 08:57:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx 00:24:15.808 08:57:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx 00:24:16.067 [2024-11-20 08:57:46.936331] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uOOBWUy3mx': 0100660 00:24:16.067 [2024-11-20 08:57:46.936392] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:16.067 request: 00:24:16.067 { 00:24:16.067 "name": "key0", 00:24:16.067 "path": "/tmp/tmp.uOOBWUy3mx", 00:24:16.067 "method": "keyring_file_add_key", 00:24:16.067 "req_id": 1 00:24:16.067 } 00:24:16.067 Got JSON-RPC error response 00:24:16.067 response: 00:24:16.067 { 00:24:16.067 "code": -1, 00:24:16.067 "message": "Operation not permitted" 00:24:16.067 } 00:24:16.067 08:57:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:16.067 08:57:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:16.067 08:57:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:16.067 08:57:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:16.067 08:57:46 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.uOOBWUy3mx 00:24:16.067 08:57:46 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx 00:24:16.067 08:57:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uOOBWUy3mx 00:24:16.326 08:57:47 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.uOOBWUy3mx 00:24:16.326 08:57:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:24:16.326 08:57:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:16.326 08:57:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:16.326 08:57:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:16.326 08:57:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:16.326 08:57:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:16.585 08:57:47 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:24:16.585 08:57:47 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.585 08:57:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:16.585 08:57:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.585 08:57:47 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:16.585 08:57:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.585 08:57:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:16.585 08:57:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.585 08:57:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.585 08:57:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.844 [2024-11-20 08:57:47.652534] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uOOBWUy3mx': No such file or directory 00:24:16.844 [2024-11-20 08:57:47.652594] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:16.844 [2024-11-20 08:57:47.652666] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:16.844 [2024-11-20 08:57:47.652677] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:24:16.844 [2024-11-20 08:57:47.652688] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:16.844 [2024-11-20 08:57:47.652697] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:16.844 request: 00:24:16.844 { 00:24:16.844 "name": "nvme0", 00:24:16.844 "trtype": "tcp", 00:24:16.844 "traddr": "127.0.0.1", 00:24:16.844 "adrfam": "ipv4", 00:24:16.844 "trsvcid": "4420", 00:24:16.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.844 "prchk_reftag": false, 00:24:16.844 "prchk_guard": false, 00:24:16.844 "hdgst": false, 00:24:16.844 "ddgst": false, 00:24:16.844 "psk": "key0", 00:24:16.844 "allow_unrecognized_csi": false, 00:24:16.844 "method": "bdev_nvme_attach_controller", 00:24:16.844 "req_id": 1 00:24:16.844 } 00:24:16.844 Got JSON-RPC error response 00:24:16.844 response: 00:24:16.844 { 00:24:16.844 "code": -19, 00:24:16.844 "message": "No such device" 00:24:16.844 } 00:24:16.844 08:57:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:16.844 08:57:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:16.844 08:57:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:16.844 08:57:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:16.844 08:57:47 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:24:16.844 08:57:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:17.103 08:57:47 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:17.103 08:57:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:17.103 08:57:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:17.103 08:57:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:17.103 
08:57:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:17.103 08:57:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:17.103 08:57:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KWSlefFvS3 00:24:17.103 08:57:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:17.103 08:57:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:17.103 08:57:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:17.103 08:57:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:17.103 08:57:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:17.103 08:57:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:17.103 08:57:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:17.103 08:57:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KWSlefFvS3 00:24:17.103 08:57:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KWSlefFvS3 00:24:17.103 08:57:48 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.KWSlefFvS3 00:24:17.103 08:57:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KWSlefFvS3 00:24:17.103 08:57:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KWSlefFvS3 00:24:17.671 08:57:48 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:17.671 08:57:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:17.671 nvme0n1 00:24:17.930 08:57:48 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:24:17.930 08:57:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:17.930 08:57:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:17.930 08:57:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:17.930 08:57:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:17.930 08:57:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.930 08:57:48 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:24:17.930 08:57:48 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:24:17.930 08:57:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:18.190 08:57:49 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:24:18.190 08:57:49 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:24:18.190 08:57:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:18.190 08:57:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:18.190 08:57:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:18.480 08:57:49 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:24:18.480 08:57:49 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:24:18.480 08:57:49 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:24:18.480 08:57:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:18.480 08:57:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:18.480 08:57:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:18.480 08:57:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:18.752 08:57:49 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:24:18.752 08:57:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:18.752 08:57:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:19.012 08:57:49 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:24:19.012 08:57:49 keyring_file -- keyring/file.sh@105 -- # jq length 00:24:19.012 08:57:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.271 08:57:50 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:24:19.271 08:57:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KWSlefFvS3 00:24:19.271 08:57:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KWSlefFvS3 00:24:19.840 08:57:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Qjc0torqfE 00:24:19.840 08:57:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Qjc0torqfE 00:24:19.840 08:57:50 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:19.840 08:57:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.099 nvme0n1 00:24:20.358 08:57:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:24:20.358 08:57:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:20.618 08:57:51 keyring_file -- keyring/file.sh@113 -- # config='{ 00:24:20.618 "subsystems": [ 00:24:20.618 { 00:24:20.618 "subsystem": "keyring", 00:24:20.618 "config": [ 00:24:20.618 { 00:24:20.618 "method": "keyring_file_add_key", 00:24:20.618 "params": { 00:24:20.618 "name": "key0", 00:24:20.618 "path": "/tmp/tmp.KWSlefFvS3" 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "keyring_file_add_key", 00:24:20.618 "params": { 00:24:20.618 "name": "key1", 00:24:20.618 "path": "/tmp/tmp.Qjc0torqfE" 00:24:20.618 } 00:24:20.618 } 00:24:20.618 ] 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "subsystem": "iobuf", 00:24:20.618 "config": [ 00:24:20.618 { 00:24:20.618 "method": "iobuf_set_options", 00:24:20.618 "params": { 00:24:20.618 "small_pool_count": 8192, 00:24:20.618 "large_pool_count": 1024, 00:24:20.618 "small_bufsize": 8192, 00:24:20.618 "large_bufsize": 135168, 00:24:20.618 "enable_numa": false 00:24:20.618 } 00:24:20.618 } 00:24:20.618 ] 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "subsystem": 
"sock", 00:24:20.618 "config": [ 00:24:20.618 { 00:24:20.618 "method": "sock_set_default_impl", 00:24:20.618 "params": { 00:24:20.618 "impl_name": "uring" 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "sock_impl_set_options", 00:24:20.618 "params": { 00:24:20.618 "impl_name": "ssl", 00:24:20.618 "recv_buf_size": 4096, 00:24:20.618 "send_buf_size": 4096, 00:24:20.618 "enable_recv_pipe": true, 00:24:20.618 "enable_quickack": false, 00:24:20.618 "enable_placement_id": 0, 00:24:20.618 "enable_zerocopy_send_server": true, 00:24:20.618 "enable_zerocopy_send_client": false, 00:24:20.618 "zerocopy_threshold": 0, 00:24:20.618 "tls_version": 0, 00:24:20.618 "enable_ktls": false 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "sock_impl_set_options", 00:24:20.618 "params": { 00:24:20.618 "impl_name": "posix", 00:24:20.618 "recv_buf_size": 2097152, 00:24:20.618 "send_buf_size": 2097152, 00:24:20.618 "enable_recv_pipe": true, 00:24:20.618 "enable_quickack": false, 00:24:20.618 "enable_placement_id": 0, 00:24:20.618 "enable_zerocopy_send_server": true, 00:24:20.618 "enable_zerocopy_send_client": false, 00:24:20.618 "zerocopy_threshold": 0, 00:24:20.618 "tls_version": 0, 00:24:20.618 "enable_ktls": false 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "sock_impl_set_options", 00:24:20.618 "params": { 00:24:20.618 "impl_name": "uring", 00:24:20.618 "recv_buf_size": 2097152, 00:24:20.618 "send_buf_size": 2097152, 00:24:20.618 "enable_recv_pipe": true, 00:24:20.618 "enable_quickack": false, 00:24:20.618 "enable_placement_id": 0, 00:24:20.618 "enable_zerocopy_send_server": false, 00:24:20.618 "enable_zerocopy_send_client": false, 00:24:20.618 "zerocopy_threshold": 0, 00:24:20.618 "tls_version": 0, 00:24:20.618 "enable_ktls": false 00:24:20.618 } 00:24:20.618 } 00:24:20.618 ] 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "subsystem": "vmd", 00:24:20.618 "config": [] 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "subsystem": "accel", 00:24:20.618 "config": [ 00:24:20.618 { 00:24:20.618 "method": "accel_set_options", 00:24:20.618 "params": { 00:24:20.618 "small_cache_size": 128, 00:24:20.618 "large_cache_size": 16, 00:24:20.618 "task_count": 2048, 00:24:20.618 "sequence_count": 2048, 00:24:20.618 "buf_count": 2048 00:24:20.618 } 00:24:20.618 } 00:24:20.618 ] 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "subsystem": "bdev", 00:24:20.618 "config": [ 00:24:20.618 { 00:24:20.618 "method": "bdev_set_options", 00:24:20.618 "params": { 00:24:20.618 "bdev_io_pool_size": 65535, 00:24:20.618 "bdev_io_cache_size": 256, 00:24:20.618 "bdev_auto_examine": true, 00:24:20.618 "iobuf_small_cache_size": 128, 00:24:20.618 "iobuf_large_cache_size": 16 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "bdev_raid_set_options", 00:24:20.618 "params": { 00:24:20.618 "process_window_size_kb": 1024, 00:24:20.618 "process_max_bandwidth_mb_sec": 0 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "bdev_iscsi_set_options", 00:24:20.618 "params": { 00:24:20.618 "timeout_sec": 30 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "bdev_nvme_set_options", 00:24:20.618 "params": { 00:24:20.618 "action_on_timeout": "none", 00:24:20.618 "timeout_us": 0, 00:24:20.618 "timeout_admin_us": 0, 00:24:20.618 "keep_alive_timeout_ms": 10000, 00:24:20.618 "arbitration_burst": 0, 00:24:20.618 "low_priority_weight": 0, 00:24:20.618 "medium_priority_weight": 0, 00:24:20.618 "high_priority_weight": 0, 00:24:20.618 "nvme_adminq_poll_period_us": 
10000, 00:24:20.618 "nvme_ioq_poll_period_us": 0, 00:24:20.618 "io_queue_requests": 512, 00:24:20.618 "delay_cmd_submit": true, 00:24:20.618 "transport_retry_count": 4, 00:24:20.618 "bdev_retry_count": 3, 00:24:20.618 "transport_ack_timeout": 0, 00:24:20.618 "ctrlr_loss_timeout_sec": 0, 00:24:20.618 "reconnect_delay_sec": 0, 00:24:20.618 "fast_io_fail_timeout_sec": 0, 00:24:20.618 "disable_auto_failback": false, 00:24:20.618 "generate_uuids": false, 00:24:20.618 "transport_tos": 0, 00:24:20.618 "nvme_error_stat": false, 00:24:20.618 "rdma_srq_size": 0, 00:24:20.618 "io_path_stat": false, 00:24:20.618 "allow_accel_sequence": false, 00:24:20.618 "rdma_max_cq_size": 0, 00:24:20.618 "rdma_cm_event_timeout_ms": 0, 00:24:20.618 "dhchap_digests": [ 00:24:20.618 "sha256", 00:24:20.618 "sha384", 00:24:20.618 "sha512" 00:24:20.618 ], 00:24:20.618 "dhchap_dhgroups": [ 00:24:20.618 "null", 00:24:20.618 "ffdhe2048", 00:24:20.618 "ffdhe3072", 00:24:20.618 "ffdhe4096", 00:24:20.618 "ffdhe6144", 00:24:20.618 "ffdhe8192" 00:24:20.618 ] 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "bdev_nvme_attach_controller", 00:24:20.618 "params": { 00:24:20.618 "name": "nvme0", 00:24:20.618 "trtype": "TCP", 00:24:20.618 "adrfam": "IPv4", 00:24:20.618 "traddr": "127.0.0.1", 00:24:20.618 "trsvcid": "4420", 00:24:20.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:20.618 "prchk_reftag": false, 00:24:20.618 "prchk_guard": false, 00:24:20.618 "ctrlr_loss_timeout_sec": 0, 00:24:20.618 "reconnect_delay_sec": 0, 00:24:20.618 "fast_io_fail_timeout_sec": 0, 00:24:20.618 "psk": "key0", 00:24:20.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:20.618 "hdgst": false, 00:24:20.618 "ddgst": false, 00:24:20.618 "multipath": "multipath" 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "bdev_nvme_set_hotplug", 00:24:20.618 "params": { 00:24:20.618 "period_us": 100000, 00:24:20.618 "enable": false 00:24:20.618 } 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "method": "bdev_wait_for_examine" 00:24:20.618 } 00:24:20.618 ] 00:24:20.618 }, 00:24:20.618 { 00:24:20.618 "subsystem": "nbd", 00:24:20.618 "config": [] 00:24:20.618 } 00:24:20.618 ] 00:24:20.618 }' 00:24:20.618 08:57:51 keyring_file -- keyring/file.sh@115 -- # killprocess 85883 00:24:20.618 08:57:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85883 ']' 00:24:20.618 08:57:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85883 00:24:20.618 08:57:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:20.619 08:57:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.619 08:57:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85883 00:24:20.619 killing process with pid 85883 00:24:20.619 Received shutdown signal, test time was about 1.000000 seconds 00:24:20.619 00:24:20.619 Latency(us) 00:24:20.619 [2024-11-20T08:57:51.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.619 [2024-11-20T08:57:51.534Z] =================================================================================================================== 00:24:20.619 [2024-11-20T08:57:51.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.619 08:57:51 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:20.619 08:57:51 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:20.619 08:57:51 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85883' 00:24:20.619 
08:57:51 keyring_file -- common/autotest_common.sh@973 -- # kill 85883 00:24:20.619 08:57:51 keyring_file -- common/autotest_common.sh@978 -- # wait 85883 00:24:20.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:20.879 08:57:51 keyring_file -- keyring/file.sh@118 -- # bperfpid=86126 00:24:20.879 08:57:51 keyring_file -- keyring/file.sh@120 -- # waitforlisten 86126 /var/tmp/bperf.sock 00:24:20.879 08:57:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86126 ']' 00:24:20.879 08:57:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:20.879 08:57:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.879 08:57:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:20.879 08:57:51 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:20.879 08:57:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.879 08:57:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:20.879 08:57:51 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:24:20.879 "subsystems": [ 00:24:20.879 { 00:24:20.879 "subsystem": "keyring", 00:24:20.879 "config": [ 00:24:20.879 { 00:24:20.879 "method": "keyring_file_add_key", 00:24:20.879 "params": { 00:24:20.879 "name": "key0", 00:24:20.879 "path": "/tmp/tmp.KWSlefFvS3" 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "keyring_file_add_key", 00:24:20.879 "params": { 00:24:20.879 "name": "key1", 00:24:20.879 "path": "/tmp/tmp.Qjc0torqfE" 00:24:20.879 } 00:24:20.879 } 00:24:20.879 ] 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "subsystem": "iobuf", 00:24:20.879 "config": [ 00:24:20.879 { 00:24:20.879 "method": "iobuf_set_options", 00:24:20.879 "params": { 00:24:20.879 "small_pool_count": 8192, 00:24:20.879 "large_pool_count": 1024, 00:24:20.879 "small_bufsize": 8192, 00:24:20.879 "large_bufsize": 135168, 00:24:20.879 "enable_numa": false 00:24:20.879 } 00:24:20.879 } 00:24:20.879 ] 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "subsystem": "sock", 00:24:20.879 "config": [ 00:24:20.879 { 00:24:20.879 "method": "sock_set_default_impl", 00:24:20.879 "params": { 00:24:20.879 "impl_name": "uring" 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "sock_impl_set_options", 00:24:20.879 "params": { 00:24:20.879 "impl_name": "ssl", 00:24:20.879 "recv_buf_size": 4096, 00:24:20.879 "send_buf_size": 4096, 00:24:20.879 "enable_recv_pipe": true, 00:24:20.879 "enable_quickack": false, 00:24:20.879 "enable_placement_id": 0, 00:24:20.879 "enable_zerocopy_send_server": true, 00:24:20.879 "enable_zerocopy_send_client": false, 00:24:20.879 "zerocopy_threshold": 0, 00:24:20.879 "tls_version": 0, 00:24:20.879 "enable_ktls": false 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "sock_impl_set_options", 00:24:20.879 "params": { 00:24:20.879 "impl_name": "posix", 00:24:20.879 "recv_buf_size": 2097152, 00:24:20.879 "send_buf_size": 2097152, 00:24:20.879 "enable_recv_pipe": true, 00:24:20.879 "enable_quickack": false, 00:24:20.879 "enable_placement_id": 0, 00:24:20.879 "enable_zerocopy_send_server": true, 00:24:20.879 "enable_zerocopy_send_client": false, 00:24:20.879 "zerocopy_threshold": 0, 00:24:20.879 "tls_version": 0, 00:24:20.879 "enable_ktls": false 
00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "sock_impl_set_options", 00:24:20.879 "params": { 00:24:20.879 "impl_name": "uring", 00:24:20.879 "recv_buf_size": 2097152, 00:24:20.879 "send_buf_size": 2097152, 00:24:20.879 "enable_recv_pipe": true, 00:24:20.879 "enable_quickack": false, 00:24:20.879 "enable_placement_id": 0, 00:24:20.879 "enable_zerocopy_send_server": false, 00:24:20.879 "enable_zerocopy_send_client": false, 00:24:20.879 "zerocopy_threshold": 0, 00:24:20.879 "tls_version": 0, 00:24:20.879 "enable_ktls": false 00:24:20.879 } 00:24:20.879 } 00:24:20.879 ] 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "subsystem": "vmd", 00:24:20.879 "config": [] 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "subsystem": "accel", 00:24:20.879 "config": [ 00:24:20.879 { 00:24:20.879 "method": "accel_set_options", 00:24:20.879 "params": { 00:24:20.879 "small_cache_size": 128, 00:24:20.879 "large_cache_size": 16, 00:24:20.879 "task_count": 2048, 00:24:20.879 "sequence_count": 2048, 00:24:20.879 "buf_count": 2048 00:24:20.879 } 00:24:20.879 } 00:24:20.879 ] 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "subsystem": "bdev", 00:24:20.879 "config": [ 00:24:20.879 { 00:24:20.879 "method": "bdev_set_options", 00:24:20.879 "params": { 00:24:20.879 "bdev_io_pool_size": 65535, 00:24:20.879 "bdev_io_cache_size": 256, 00:24:20.879 "bdev_auto_examine": true, 00:24:20.879 "iobuf_small_cache_size": 128, 00:24:20.879 "iobuf_large_cache_size": 16 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "bdev_raid_set_options", 00:24:20.879 "params": { 00:24:20.879 "process_window_size_kb": 1024, 00:24:20.879 "process_max_bandwidth_mb_sec": 0 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "bdev_iscsi_set_options", 00:24:20.879 "params": { 00:24:20.879 "timeout_sec": 30 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "bdev_nvme_set_options", 00:24:20.879 "params": { 00:24:20.879 "action_on_timeout": "none", 00:24:20.879 "timeout_us": 0, 00:24:20.879 "timeout_admin_us": 0, 00:24:20.879 "keep_alive_timeout_ms": 10000, 00:24:20.879 "arbitration_burst": 0, 00:24:20.879 "low_priority_weight": 0, 00:24:20.879 "medium_priority_weight": 0, 00:24:20.879 "high_priority_weight": 0, 00:24:20.879 "nvme_adminq_poll_period_us": 10000, 00:24:20.879 "nvme_ioq_poll_period_us": 0, 00:24:20.879 "io_queue_requests": 512, 00:24:20.879 "delay_cmd_submit": true, 00:24:20.879 "transport_retry_count": 4, 00:24:20.879 "bdev_retry_count": 3, 00:24:20.879 "transport_ack_timeout": 0, 00:24:20.879 "ctrlr_loss_timeout_sec": 0, 00:24:20.879 "reconnect_delay_sec": 0, 00:24:20.879 "fast_io_fail_timeout_sec": 0, 00:24:20.879 "disable_auto_failback": false, 00:24:20.879 "generate_uuids": false, 00:24:20.879 "transport_tos": 0, 00:24:20.879 "nvme_error_stat": false, 00:24:20.879 "rdma_srq_size": 0, 00:24:20.879 "io_path_stat": false, 00:24:20.879 "allow_accel_sequence": false, 00:24:20.879 "rdma_max_cq_size": 0, 00:24:20.879 "rdma_cm_event_timeout_ms": 0, 00:24:20.879 "dhchap_digests": [ 00:24:20.879 "sha256", 00:24:20.879 "sha384", 00:24:20.879 "sha512" 00:24:20.879 ], 00:24:20.879 "dhchap_dhgroups": [ 00:24:20.879 "null", 00:24:20.879 "ffdhe2048", 00:24:20.879 "ffdhe3072", 00:24:20.879 "ffdhe4096", 00:24:20.879 "ffdhe6144", 00:24:20.879 "ffdhe8192" 00:24:20.879 ] 00:24:20.879 } 00:24:20.879 }, 00:24:20.879 { 00:24:20.879 "method": "bdev_nvme_attach_controller", 00:24:20.879 "params": { 00:24:20.879 "name": "nvme0", 00:24:20.879 "trtype": "TCP", 00:24:20.879 "adrfam": "IPv4", 
00:24:20.879 "traddr": "127.0.0.1", 00:24:20.879 "trsvcid": "4420", 00:24:20.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:20.879 "prchk_reftag": false, 00:24:20.879 "prchk_guard": false, 00:24:20.880 "ctrlr_loss_timeout_sec": 0, 00:24:20.880 "reconnect_delay_sec": 0, 00:24:20.880 "fast_io_fail_timeout_sec": 0, 00:24:20.880 "psk": "key0", 00:24:20.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:20.880 "hdgst": false, 00:24:20.880 "ddgst": false, 00:24:20.880 "multipath": "multipath" 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_nvme_set_hotplug", 00:24:20.880 "params": { 00:24:20.880 "period_us": 100000, 00:24:20.880 "enable": false 00:24:20.880 } 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "method": "bdev_wait_for_examine" 00:24:20.880 } 00:24:20.880 ] 00:24:20.880 }, 00:24:20.880 { 00:24:20.880 "subsystem": "nbd", 00:24:20.880 "config": [] 00:24:20.880 } 00:24:20.880 ] 00:24:20.880 }' 00:24:20.880 [2024-11-20 08:57:51.654533] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:24:20.880 [2024-11-20 08:57:51.654866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86126 ] 00:24:21.139 [2024-11-20 08:57:51.803404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.139 [2024-11-20 08:57:51.892670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.139 [2024-11-20 08:57:52.050262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:21.397 [2024-11-20 08:57:52.121452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.966 08:57:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.966 08:57:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:21.966 08:57:52 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:24:21.966 08:57:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:21.966 08:57:52 keyring_file -- keyring/file.sh@121 -- # jq length 00:24:22.225 08:57:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:22.225 08:57:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:24:22.225 08:57:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:22.225 08:57:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:22.225 08:57:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.225 08:57:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.225 08:57:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:22.484 08:57:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:24:22.484 08:57:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:24:22.484 08:57:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:22.484 08:57:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:22.484 08:57:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.484 08:57:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.484 08:57:53 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:22.744 08:57:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:24:22.744 08:57:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:24:22.744 08:57:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:24:22.744 08:57:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:23.003 08:57:53 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:24:23.003 08:57:53 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:23.003 08:57:53 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.KWSlefFvS3 /tmp/tmp.Qjc0torqfE 00:24:23.003 08:57:53 keyring_file -- keyring/file.sh@20 -- # killprocess 86126 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86126 ']' 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86126 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86126 00:24:23.003 killing process with pid 86126 00:24:23.003 Received shutdown signal, test time was about 1.000000 seconds 00:24:23.003 00:24:23.003 Latency(us) 00:24:23.003 [2024-11-20T08:57:53.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.003 [2024-11-20T08:57:53.918Z] =================================================================================================================== 00:24:23.003 [2024-11-20T08:57:53.918Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86126' 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@973 -- # kill 86126 00:24:23.003 08:57:53 keyring_file -- common/autotest_common.sh@978 -- # wait 86126 00:24:23.263 08:57:54 keyring_file -- keyring/file.sh@21 -- # killprocess 85872 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85872 ']' 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85872 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85872 00:24:23.263 killing process with pid 85872 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85872' 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@973 -- # kill 85872 00:24:23.263 08:57:54 keyring_file -- common/autotest_common.sh@978 -- # wait 85872 00:24:23.831 00:24:23.831 real 0m15.468s 00:24:23.831 user 0m38.609s 00:24:23.831 sys 0m3.284s 00:24:23.831 08:57:54 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.831 
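
The get_refcnt/get_key helpers traced above boil down to querying the keyring over the bperf RPC socket and filtering the result with jq. A condensed sketch of that pattern, using the socket path and key names from the trace (the helper shape mirrors keyring/common.sh, not its literal code):

    # Read a key's refcount from the bdevperf instance via its RPC socket.
    get_refcnt() {
        local name=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r ".[] | select(.name == \"$name\") | .refcnt"
    }

    get_refcnt key0   # the test above expects 2 (key in use by the attached controller)
    get_refcnt key1   # and 1 (registered but unused)
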
************************************ 00:24:23.831 END TEST keyring_file 00:24:23.831 ************************************ 00:24:23.831 08:57:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:23.831 08:57:54 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:24:23.831 08:57:54 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:23.831 08:57:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.831 08:57:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.831 08:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:23.831 ************************************ 00:24:23.831 START TEST keyring_linux 00:24:23.831 ************************************ 00:24:23.831 08:57:54 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:23.831 Joined session keyring: 995012965 00:24:24.091 * Looking for test storage... 00:24:24.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:24.091 08:57:54 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:24.091 08:57:54 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:24:24.091 08:57:54 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:24.091 08:57:54 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@345 -- # : 1 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:24:24.091 08:57:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@368 -- # return 0 00:24:24.092 08:57:54 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.092 08:57:54 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:24.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.092 --rc genhtml_branch_coverage=1 00:24:24.092 --rc genhtml_function_coverage=1 00:24:24.092 --rc genhtml_legend=1 00:24:24.092 --rc geninfo_all_blocks=1 00:24:24.092 --rc geninfo_unexecuted_blocks=1 00:24:24.092 00:24:24.092 ' 00:24:24.092 08:57:54 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:24.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.092 --rc genhtml_branch_coverage=1 00:24:24.092 --rc genhtml_function_coverage=1 00:24:24.092 --rc genhtml_legend=1 00:24:24.092 --rc geninfo_all_blocks=1 00:24:24.092 --rc geninfo_unexecuted_blocks=1 00:24:24.092 00:24:24.092 ' 00:24:24.092 08:57:54 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:24.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.092 --rc genhtml_branch_coverage=1 00:24:24.092 --rc genhtml_function_coverage=1 00:24:24.092 --rc genhtml_legend=1 00:24:24.092 --rc geninfo_all_blocks=1 00:24:24.092 --rc geninfo_unexecuted_blocks=1 00:24:24.092 00:24:24.092 ' 00:24:24.092 08:57:54 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:24.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.092 --rc genhtml_branch_coverage=1 00:24:24.092 --rc genhtml_function_coverage=1 00:24:24.092 --rc genhtml_legend=1 00:24:24.092 --rc geninfo_all_blocks=1 00:24:24.092 --rc geninfo_unexecuted_blocks=1 00:24:24.092 00:24:24.092 ' 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.092 08:57:54 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=0afca7d9-79fe-4335-8b1a-f5cf8c0a3edb 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.092 08:57:54 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.092 08:57:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.092 08:57:54 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.092 08:57:54 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.092 08:57:54 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:24.092 08:57:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.092 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:24.092 /tmp/:spdk-test:key0 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:24.092 08:57:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:24.092 08:57:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:24.092 /tmp/:spdk-test:key1 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86254 00:24:24.092 08:57:54 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:24.092 08:57:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86254 00:24:24.092 08:57:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86254 ']' 00:24:24.092 08:57:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.092 08:57:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.092 08:57:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.093 08:57:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.093 08:57:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:24.353 [2024-11-20 08:57:55.110002] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
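
For reference, the prep_key/format_interchange_psk calls traced above wrap a hex key string into the NVMe TLS PSK interchange form NVMeTLSkey-1:<digest>:<base64(key + CRC32)>:. A rough sketch of the equivalent computation, assuming the CRC32 is appended little-endian as the keys printed further down suggest (this is an illustration, not the literal nvmf/common.sh code):

    format_interchange_psk() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                   # ASCII hex string, e.g. 00112233445566778899aabbccddeeff
    digest = int(sys.argv[2])                    # 0 -> no hash, printed as "00"
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order
    print("NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()), end="")
    EOF
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0
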
00:24:24.353 [2024-11-20 08:57:55.110877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86254 ] 00:24:24.611 [2024-11-20 08:57:55.267870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.611 [2024-11-20 08:57:55.342821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.612 [2024-11-20 08:57:55.440025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:25.180 08:57:56 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.180 08:57:56 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:25.180 08:57:56 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:25.180 08:57:56 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.180 08:57:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:25.180 [2024-11-20 08:57:56.090932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.439 null0 00:24:25.439 [2024-11-20 08:57:56.122901] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:25.439 [2024-11-20 08:57:56.123099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:25.439 08:57:56 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.439 08:57:56 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:25.439 559688404 00:24:25.439 08:57:56 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:25.439 150511680 00:24:25.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:25.439 08:57:56 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86272 00:24:25.439 08:57:56 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86272 /var/tmp/bperf.sock 00:24:25.439 08:57:56 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:25.439 08:57:56 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86272 ']' 00:24:25.439 08:57:56 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:25.439 08:57:56 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.439 08:57:56 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:25.439 08:57:56 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.439 08:57:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:25.439 [2024-11-20 08:57:56.201066] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
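
The rpc_cmd heredoc at linux.sh@54 is not echoed into the trace, but the notices that follow (TCP transport init, a null0 bdev, a TLS listener on 127.0.0.1:4420) correspond to a target-side setup roughly like the following. The method names are real SPDK RPCs; the exact parameters and ordering here are assumptions, not the script's literal contents:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to spdk_tgt on /var/tmp/spdk.sock

    $rpc nvmf_create_transport -t tcp                                 # "*** TCP Transport Init ***"
    $rpc bdev_null_create null0 100 4096                              # the null0 bdev in the trace (size assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp \
        -a 127.0.0.1 -s 4420 --secure-channel                         # TLS listener -> "experimental" notice
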
00:24:25.439 [2024-11-20 08:57:56.201149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86272 ] 00:24:25.439 [2024-11-20 08:57:56.347502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.698 [2024-11-20 08:57:56.419124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.635 08:57:57 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.635 08:57:57 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:26.635 08:57:57 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:26.635 08:57:57 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:26.635 08:57:57 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:26.635 08:57:57 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:26.894 [2024-11-20 08:57:57.759652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:27.153 08:57:57 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:27.153 08:57:57 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:27.412 [2024-11-20 08:57:58.099229] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.412 nvme0n1 00:24:27.412 08:57:58 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:27.412 08:57:58 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:27.412 08:57:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:27.412 08:57:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:27.412 08:57:58 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:27.412 08:57:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:27.671 08:57:58 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:27.671 08:57:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:27.671 08:57:58 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:27.671 08:57:58 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:27.671 08:57:58 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:27.671 08:57:58 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:27.671 08:57:58 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:27.930 08:57:58 keyring_linux -- keyring/linux.sh@25 -- # sn=559688404 00:24:27.930 08:57:58 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:27.930 08:57:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
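
Unlike keyring_file, the keyring_linux flow verifies keys through the kernel session keyring rather than files, which is what the keyctl trace above exercises. Pieced together from the trace (serial numbers differ per run):

    # Add the PSKs to the session keyring; keyctl prints the serial number.
    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # -> 559688404 in this run
    keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s   # -> 150511680

    # Look a key up by description and dump its payload for comparison.
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"

    # Cleanup: unlink the key, as done at the end of the test.
    keyctl unlink "$sn"
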
00:24:27.930 08:57:58 keyring_linux -- keyring/linux.sh@26 -- # [[ 559688404 == \5\5\9\6\8\8\4\0\4 ]] 00:24:27.930 08:57:58 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 559688404 00:24:27.930 08:57:58 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:27.930 08:57:58 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:27.930 Running I/O for 1 seconds... 00:24:29.332 11942.00 IOPS, 46.65 MiB/s 00:24:29.332 Latency(us) 00:24:29.332 [2024-11-20T08:58:00.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.332 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:29.332 nvme0n1 : 1.01 11949.96 46.68 0.00 0.00 10652.49 3098.07 18826.71 00:24:29.332 [2024-11-20T08:58:00.247Z] =================================================================================================================== 00:24:29.332 [2024-11-20T08:58:00.247Z] Total : 11949.96 46.68 0.00 0.00 10652.49 3098.07 18826.71 00:24:29.332 { 00:24:29.332 "results": [ 00:24:29.332 { 00:24:29.332 "job": "nvme0n1", 00:24:29.332 "core_mask": "0x2", 00:24:29.332 "workload": "randread", 00:24:29.332 "status": "finished", 00:24:29.332 "queue_depth": 128, 00:24:29.332 "io_size": 4096, 00:24:29.332 "runtime": 1.010045, 00:24:29.332 "iops": 11949.96262542758, 00:24:29.332 "mibps": 46.67954150557649, 00:24:29.332 "io_failed": 0, 00:24:29.332 "io_timeout": 0, 00:24:29.332 "avg_latency_us": 10652.492528131355, 00:24:29.332 "min_latency_us": 3098.0654545454545, 00:24:29.332 "max_latency_us": 18826.705454545456 00:24:29.332 } 00:24:29.332 ], 00:24:29.332 "core_count": 1 00:24:29.332 } 00:24:29.332 08:57:59 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:29.332 08:57:59 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:29.332 08:58:00 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:29.332 08:58:00 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:29.332 08:58:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:29.332 08:58:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:29.332 08:58:00 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:29.332 08:58:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:29.592 08:58:00 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:29.592 08:58:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:29.592 08:58:00 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:29.592 08:58:00 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:29.592 08:58:00 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:24:29.592 08:58:00 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
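
Strung together, the happy-path check above is: attach an NVMe/TCP controller through the bperf RPC socket using the keyring PSK, then drive the 1-second randread workload that bdevperf was started with. The commands as issued in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach over TCP with the TLS PSK taken from the kernel keyring.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

    # Kick off the workload (bdevperf was launched with -q 128 -o 4k -w randread -t 1).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
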
00:24:29.592 08:58:00 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:29.592 08:58:00 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.592 08:58:00 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:29.592 08:58:00 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.592 08:58:00 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:29.592 08:58:00 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:29.851 [2024-11-20 08:58:00.726513] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:29.851 [2024-11-20 08:58:00.726789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cf5d0 (107): Transport endpoint is not connected 00:24:29.851 [2024-11-20 08:58:00.727780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cf5d0 (9): Bad file descriptor 00:24:29.851 [2024-11-20 08:58:00.728778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:29.851 [2024-11-20 08:58:00.728809] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:29.851 [2024-11-20 08:58:00.728822] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:29.851 [2024-11-20 08:58:00.728835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:29.851 request: 00:24:29.851 { 00:24:29.851 "name": "nvme0", 00:24:29.851 "trtype": "tcp", 00:24:29.851 "traddr": "127.0.0.1", 00:24:29.851 "adrfam": "ipv4", 00:24:29.851 "trsvcid": "4420", 00:24:29.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:29.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:29.851 "prchk_reftag": false, 00:24:29.851 "prchk_guard": false, 00:24:29.851 "hdgst": false, 00:24:29.851 "ddgst": false, 00:24:29.851 "psk": ":spdk-test:key1", 00:24:29.851 "allow_unrecognized_csi": false, 00:24:29.851 "method": "bdev_nvme_attach_controller", 00:24:29.851 "req_id": 1 00:24:29.851 } 00:24:29.851 Got JSON-RPC error response 00:24:29.851 response: 00:24:29.851 { 00:24:29.851 "code": -5, 00:24:29.851 "message": "Input/output error" 00:24:29.851 } 00:24:29.851 08:58:00 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:24:29.851 08:58:00 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:29.851 08:58:00 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:29.851 08:58:00 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@33 -- # sn=559688404 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 559688404 00:24:29.852 1 links removed 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@33 -- # sn=150511680 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 150511680 00:24:29.852 1 links removed 00:24:29.852 08:58:00 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86272 00:24:29.852 08:58:00 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86272 ']' 00:24:29.852 08:58:00 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86272 00:24:30.111 08:58:00 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:30.111 08:58:00 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.111 08:58:00 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86272 00:24:30.111 killing process with pid 86272 00:24:30.111 Received shutdown signal, test time was about 1.000000 seconds 00:24:30.111 00:24:30.111 Latency(us) 00:24:30.111 [2024-11-20T08:58:01.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.111 [2024-11-20T08:58:01.026Z] =================================================================================================================== 00:24:30.111 [2024-11-20T08:58:01.026Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.111 08:58:00 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:30.111 08:58:00 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:30.111 08:58:00 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86272' 00:24:30.111 08:58:00 keyring_linux -- common/autotest_common.sh@973 -- # kill 86272 00:24:30.111 08:58:00 keyring_linux -- common/autotest_common.sh@978 -- # wait 86272 00:24:30.372 08:58:01 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86254 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86254 ']' 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86254 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86254 00:24:30.372 killing process with pid 86254 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86254' 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@973 -- # kill 86254 00:24:30.372 08:58:01 keyring_linux -- common/autotest_common.sh@978 -- # wait 86254 00:24:30.941 00:24:30.941 real 0m6.949s 00:24:30.941 user 0m13.345s 00:24:30.941 sys 0m1.710s 00:24:30.941 08:58:01 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.941 ************************************ 00:24:30.941 08:58:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:30.941 END TEST keyring_linux 00:24:30.941 ************************************ 00:24:30.941 08:58:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:30.941 08:58:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:30.941 08:58:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:30.941 08:58:01 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:30.941 08:58:01 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:30.941 08:58:01 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:24:30.941 08:58:01 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:30.941 08:58:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.941 08:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:30.941 08:58:01 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:30.941 08:58:01 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:30.941 08:58:01 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:30.941 08:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:32.846 INFO: APP EXITING 00:24:32.846 INFO: killing all VMs 
00:24:32.846 INFO: killing vhost app 00:24:32.846 INFO: EXIT DONE 00:24:33.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:33.415 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:33.415 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:33.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:33.983 Cleaning 00:24:33.983 Removing: /var/run/dpdk/spdk0/config 00:24:33.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:33.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:33.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:33.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:33.983 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:33.983 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:33.983 Removing: /var/run/dpdk/spdk1/config 00:24:33.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:33.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:33.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:33.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:33.983 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:33.983 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:33.983 Removing: /var/run/dpdk/spdk2/config 00:24:33.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:33.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:33.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:33.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:33.983 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:33.983 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:33.983 Removing: /var/run/dpdk/spdk3/config 00:24:33.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:33.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:34.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:34.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:34.242 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:34.242 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:34.242 Removing: /var/run/dpdk/spdk4/config 00:24:34.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:34.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:34.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:34.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:34.242 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:34.242 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:34.242 Removing: /dev/shm/nvmf_trace.0 00:24:34.242 Removing: /dev/shm/spdk_tgt_trace.pid56788 00:24:34.242 Removing: /var/run/dpdk/spdk0 00:24:34.242 Removing: /var/run/dpdk/spdk1 00:24:34.242 Removing: /var/run/dpdk/spdk2 00:24:34.242 Removing: /var/run/dpdk/spdk3 00:24:34.242 Removing: /var/run/dpdk/spdk4 00:24:34.242 Removing: /var/run/dpdk/spdk_pid56629 00:24:34.242 Removing: /var/run/dpdk/spdk_pid56788 00:24:34.242 Removing: /var/run/dpdk/spdk_pid56999 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57086 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57111 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57221 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57231 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57371 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57572 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57726 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57810 00:24:34.242 
Removing: /var/run/dpdk/spdk_pid57900 00:24:34.242 Removing: /var/run/dpdk/spdk_pid57991 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58069 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58113 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58143 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58212 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58288 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58743 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58788 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58839 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58847 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58925 00:24:34.242 Removing: /var/run/dpdk/spdk_pid58941 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59014 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59030 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59081 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59099 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59139 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59155 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59291 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59321 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59409 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59754 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59766 00:24:34.242 Removing: /var/run/dpdk/spdk_pid59797 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59816 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59837 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59858 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59877 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59888 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59917 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59931 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59946 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59971 00:24:34.243 Removing: /var/run/dpdk/spdk_pid59984 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60008 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60029 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60048 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60064 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60088 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60102 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60122 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60153 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60171 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60202 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60274 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60308 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60317 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60346 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60355 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60363 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60411 00:24:34.243 Removing: /var/run/dpdk/spdk_pid60424 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60453 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60468 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60478 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60487 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60502 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60511 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60521 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60536 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60565 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60591 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60606 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60640 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60644 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60657 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60703 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60709 00:24:34.502 Removing: 
/var/run/dpdk/spdk_pid60741 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60754 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60756 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60769 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60778 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60784 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60797 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60805 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60887 00:24:34.502 Removing: /var/run/dpdk/spdk_pid60940 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61063 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61097 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61142 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61162 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61179 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61198 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61230 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61251 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61329 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61349 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61401 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61466 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61554 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61583 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61684 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61731 00:24:34.502 Removing: /var/run/dpdk/spdk_pid61769 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62001 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62093 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62127 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62151 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62190 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62223 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62257 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62294 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62691 00:24:34.502 Removing: /var/run/dpdk/spdk_pid62729 00:24:34.502 Removing: /var/run/dpdk/spdk_pid63078 00:24:34.502 Removing: /var/run/dpdk/spdk_pid63547 00:24:34.502 Removing: /var/run/dpdk/spdk_pid63828 00:24:34.502 Removing: /var/run/dpdk/spdk_pid64702 00:24:34.503 Removing: /var/run/dpdk/spdk_pid65624 00:24:34.503 Removing: /var/run/dpdk/spdk_pid65741 00:24:34.503 Removing: /var/run/dpdk/spdk_pid65809 00:24:34.503 Removing: /var/run/dpdk/spdk_pid67239 00:24:34.503 Removing: /var/run/dpdk/spdk_pid67551 00:24:34.503 Removing: /var/run/dpdk/spdk_pid71438 00:24:34.503 Removing: /var/run/dpdk/spdk_pid71812 00:24:34.503 Removing: /var/run/dpdk/spdk_pid71921 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72061 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72090 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72111 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72141 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72222 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72362 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72512 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72592 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72787 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72864 00:24:34.503 Removing: /var/run/dpdk/spdk_pid72955 00:24:34.503 Removing: /var/run/dpdk/spdk_pid73314 00:24:34.503 Removing: /var/run/dpdk/spdk_pid73726 00:24:34.503 Removing: /var/run/dpdk/spdk_pid73727 00:24:34.503 Removing: /var/run/dpdk/spdk_pid73728 00:24:34.503 Removing: /var/run/dpdk/spdk_pid73996 00:24:34.503 Removing: /var/run/dpdk/spdk_pid74261 00:24:34.503 Removing: /var/run/dpdk/spdk_pid74663 00:24:34.503 Removing: /var/run/dpdk/spdk_pid74672 00:24:34.503 Removing: /var/run/dpdk/spdk_pid74989 00:24:34.503 Removing: /var/run/dpdk/spdk_pid75009 
00:24:34.762 Removing: /var/run/dpdk/spdk_pid75023
00:24:34.762 Removing: /var/run/dpdk/spdk_pid75054
00:24:34.762 Removing: /var/run/dpdk/spdk_pid75064
00:24:34.762 Removing: /var/run/dpdk/spdk_pid75419
00:24:34.762 Removing: /var/run/dpdk/spdk_pid75466
00:24:34.762 Removing: /var/run/dpdk/spdk_pid75811
00:24:34.762 Removing: /var/run/dpdk/spdk_pid76006
00:24:34.762 Removing: /var/run/dpdk/spdk_pid76436
00:24:34.762 Removing: /var/run/dpdk/spdk_pid76995
00:24:34.762 Removing: /var/run/dpdk/spdk_pid77897
00:24:34.762 Removing: /var/run/dpdk/spdk_pid78519
00:24:34.762 Removing: /var/run/dpdk/spdk_pid78521
00:24:34.762 Removing: /var/run/dpdk/spdk_pid80545
00:24:34.762 Removing: /var/run/dpdk/spdk_pid80598
00:24:34.762 Removing: /var/run/dpdk/spdk_pid80664
00:24:34.762 Removing: /var/run/dpdk/spdk_pid80725
00:24:34.762 Removing: /var/run/dpdk/spdk_pid80846
00:24:34.762 Removing: /var/run/dpdk/spdk_pid80912
00:24:34.762 Removing: /var/run/dpdk/spdk_pid80967
00:24:34.762 Removing: /var/run/dpdk/spdk_pid81033
00:24:34.762 Removing: /var/run/dpdk/spdk_pid81408
00:24:34.762 Removing: /var/run/dpdk/spdk_pid82617
00:24:34.762 Removing: /var/run/dpdk/spdk_pid82766
00:24:34.762 Removing: /var/run/dpdk/spdk_pid83001
00:24:34.762 Removing: /var/run/dpdk/spdk_pid83608
00:24:34.762 Removing: /var/run/dpdk/spdk_pid83770
00:24:34.762 Removing: /var/run/dpdk/spdk_pid83930
00:24:34.762 Removing: /var/run/dpdk/spdk_pid84027
00:24:34.762 Removing: /var/run/dpdk/spdk_pid84195
00:24:34.762 Removing: /var/run/dpdk/spdk_pid84304
00:24:34.762 Removing: /var/run/dpdk/spdk_pid85014
00:24:34.763 Removing: /var/run/dpdk/spdk_pid85049
00:24:34.763 Removing: /var/run/dpdk/spdk_pid85079
00:24:34.763 Removing: /var/run/dpdk/spdk_pid85333
00:24:34.763 Removing: /var/run/dpdk/spdk_pid85368
00:24:34.763 Removing: /var/run/dpdk/spdk_pid85399
00:24:34.763 Removing: /var/run/dpdk/spdk_pid85872
00:24:34.763 Removing: /var/run/dpdk/spdk_pid85883
00:24:34.763 Removing: /var/run/dpdk/spdk_pid86126
00:24:34.763 Removing: /var/run/dpdk/spdk_pid86254
00:24:34.763 Removing: /var/run/dpdk/spdk_pid86272
00:24:34.763 Clean
00:24:34.763 08:58:05 -- common/autotest_common.sh@1453 -- # return 0
00:24:34.763 08:58:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:24:34.763 08:58:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:34.763 08:58:05 -- common/autotest_common.sh@10 -- # set +x
00:24:34.763 08:58:05 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:24:34.763 08:58:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:34.763 08:58:05 -- common/autotest_common.sh@10 -- # set +x
00:24:35.022 08:58:05 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:24:35.022 08:58:05 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:24:35.022 08:58:05 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:24:35.022 08:58:05 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:24:35.022 08:58:05 -- spdk/autotest.sh@398 -- # hostname
00:24:35.022 08:58:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:24:35.022 geninfo: WARNING: invalid characters removed from testname!
00:25:01.603 08:58:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:02.982 08:58:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:05.519 08:58:36 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:08.810 08:58:39 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:11.345 08:58:42 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:14.631 08:58:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:17.166 08:58:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:17.166 08:58:47 -- spdk/autorun.sh@1 -- $ timing_finish
00:25:17.166 08:58:47 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:25:17.166 08:58:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:17.166 08:58:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:25:17.166 08:58:47 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:17.166 + [[ -n 5210 ]]
00:25:17.166 + sudo kill 5210
00:25:17.176 [Pipeline] }
00:25:17.191 [Pipeline] // timeout
00:25:17.196 [Pipeline] }
00:25:17.211 [Pipeline] // stage
00:25:17.218 [Pipeline] }
00:25:17.234 [Pipeline] // catchError
00:25:17.244 [Pipeline] stage
00:25:17.247 [Pipeline] { (Stop VM)
00:25:17.262 [Pipeline] sh
00:25:17.553 + vagrant halt
00:25:21.752 ==> default: Halting domain...
00:25:27.035 [Pipeline] sh
00:25:27.320 + vagrant destroy -f
00:25:31.533 ==> default: Removing domain...
00:25:31.546 [Pipeline] sh
00:25:31.843 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/output
00:25:31.851 [Pipeline] }
00:25:31.869 [Pipeline] // stage
00:25:31.874 [Pipeline] }
00:25:31.887 [Pipeline] // dir
00:25:31.892 [Pipeline] }
00:25:31.906 [Pipeline] // wrap
00:25:31.913 [Pipeline] }
00:25:31.925 [Pipeline] // catchError
00:25:31.933 [Pipeline] stage
00:25:31.935 [Pipeline] { (Epilogue)
00:25:31.947 [Pipeline] sh
00:25:32.227 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:38.803 [Pipeline] catchError
00:25:38.805 [Pipeline] {
00:25:38.819 [Pipeline] sh
00:25:39.101 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:39.101 Artifacts sizes are good
00:25:39.110 [Pipeline] }
00:25:39.124 [Pipeline] // catchError
00:25:39.136 [Pipeline] archiveArtifacts
00:25:39.144 Archiving artifacts
00:25:39.266 [Pipeline] cleanWs
00:25:39.281 [WS-CLEANUP] Deleting project workspace...
00:25:39.281 [WS-CLEANUP] Deferred wipeout is used...
00:25:39.288 [WS-CLEANUP] done
00:25:39.290 [Pipeline] }
00:25:39.306 [Pipeline] // stage
00:25:39.313 [Pipeline] }
00:25:39.328 [Pipeline] // node
00:25:39.335 [Pipeline] End of Pipeline
00:25:39.372 Finished: SUCCESS